Research Article: Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

Xiao-Li Li, Chao Jia, De-Xin Liu, and Da-Wei Ding

School of Automation and Electrical Engineering and the Key Laboratory of Advanced Control of Iron and Steel Process (Ministry of Education), University of Science and Technology Beijing, Beijing 100083, China

Correspondence should be addressed to Xiao-Li Li; lixiaoli@hotmail.com

Received 11 April 2014; Accepted 23 April 2014; Published 22 May 2014

Academic Editor: Bo Shen

Copyright © 2014 Xiao-Li Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
As a kind of novel feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems. The properties of simple structure and fast convergence of ELM can be shown clearly. In this paper we are interested in the adaptive control of nonlinear dynamic plants by using OS-ELM (online sequential extreme learning machine) neural networks. Based on data scope division, the problem that the training process of the ELM neural network is sensitive to the initial training data is also solved: according to the output range of the controlled plant, the data corresponding to this range are used to initialize the ELM. Furthermore, to overcome the drawback of conventional adaptive control when the OS-ELM neural network is used for adaptive control of a system with jumping parameters, the topological structure of the neural network can be adjusted dynamically by using a multiple model switching strategy, and MMAC (multiple model adaptive control) is used to improve the control performance. Simulation results are included to complement the theoretical results.
1. Introduction
Modeling and control of nonlinear systems have always been a main research field in control theory and engineering [1–3]; great progress has been made in this field in the past 30 years, but many problems still exist. In actual industrial processes, exact mathematical models are often very difficult to obtain, even when huge amounts of input-output or state data exist. Therefore, data-based control of nonlinear systems has received attention, especially in the recent 10 years. The artificial neural network has been recognized as a powerful tool for the identification and control of nonlinear systems since it was first applied to nonlinear systems [4]. Numerous learning algorithms have been proposed for the training of such networks, among which the BP algorithm [5] and the RBF algorithm are used very frequently. But it is generally known that BP faces some bottlenecks, such as tedious manual parameter tuning (learning rate, learning epochs, etc.), local minima, and poor computational scalability. Recently, a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs), named the extreme learning machine
(ELM), has been proposed by Huang et al. [6, 7]. The essence of ELM is that the input weights and biases of the hidden neurons are generated randomly; the hidden layer of the SLFN need not be tuned [8]. Furthermore, based on a least-squares scheme, the smallest-norm solution can be calculated analytically. Given that the ELM algorithm is free of tedious iterative tuning and its learning process is implemented through a simple generalized inverse operation, it can achieve a much faster learning speed than other traditional algorithms.
The training of the batch ELM algorithm can be carried out only after all the training samples are ready. But in actual applications the training data may come one by one or chunk by chunk [8]. Therefore, Liang et al. proposed the OS-ELM algorithm [9], which can adjust its output weights adaptively when new observations are received. Unlike batch learning algorithms, as soon as the learning process for particular training data is completed, the data are discarded; thus retraining on previous observations is avoided [8].
To the best of our knowledge, most published research on ELM focuses on regression and classification [10–12]; only a few published results concern the identification and control of nonlinear dynamic systems. In this paper our interest is the identification and control of nonlinear dynamic plants by using OS-ELM neural networks, for which an adaptive controller based on the OS-ELM neural network model is constructed. In the simulation process, two aspects are addressed. On one hand, since the training process of the OS-ELM neural network is sensitive to the initial training data, different regions of initial data can cause large diversity in the control result; thus large numbers of initial data are classified beforehand into multiple regions, and then, according to the range of the expected output of the controlled plant, the corresponding initial data are chosen to initialize the OS-ELM neural networks. On the other hand, a multiple model switching strategy is used in adaptive control with OS-ELM neural networks when a system with jumping parameters is controlled. Based on these two aspects, the OS-ELM algorithm is extended to the field of control.

(Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2014, Article ID 267609, 11 pages; http://dx.doi.org/10.1155/2014/267609)
The paper is organized as follows. The structure of the ELM neural network and the OS-ELM algorithm are introduced in Section 2. Adaptive control of nonlinear dynamic systems by using OS-ELM is stated in Section 3. Considering that OS-ELM is sensitive to the initial training data, the relationship between control performance and the selection of the initial training data is discussed for the OS-ELM neural network; furthermore, simulation results for OS-ELM control and multiple OS-ELM model control are shown in Section 4. Finally, some conclusions are drawn in Section 5.
2. ELM Neural Networks
Let us consider standard single-hidden-layer feedforward neural networks (SLFNs) with $L$ hidden nodes and activation function $G(\mathbf{a}_i, b_i, \mathbf{x})$; the structure is shown in Figure 1. Given $N$ arbitrary distinct samples $(\mathbf{x}_i, \mathbf{t}_i)_{i=1}^{N} \subset \mathbb{R}^n \times \mathbb{R}^m$, the network can be mathematically modeled as

$$f_L(\mathbf{x}) = \sum_{i=1}^{L} \boldsymbol{\beta}_i h_i(\mathbf{x}) = \sum_{i=1}^{L} \boldsymbol{\beta}_i G(\mathbf{a}_i, b_i, \mathbf{x}), \quad (1)$$

where $\mathbf{a}_i$ is the weight vector connecting the $i$th hidden neuron and the input neurons, $\boldsymbol{\beta}_i$ is the weight vector connecting the $i$th hidden neuron and the output neurons, $b_i$ is the bias of the $i$th hidden neuron, and $G(\mathbf{a}_i, b_i, \mathbf{x})$ denotes the output function of the $i$th hidden node, which is bounded.

To approximate these $N$ samples $(\mathbf{x}_i, \mathbf{t}_i)_{i=1}^{N} \subset \mathbb{R}^n \times \mathbb{R}^m$ with zero error is to search for a solution $(\mathbf{a}_i, b_i)$ and $\boldsymbol{\beta}_i$ satisfying

$$f_L(\mathbf{x}_j) = \sum_{i=1}^{L} \boldsymbol{\beta}_i G(\mathbf{a}_i, b_i, \mathbf{x}_j) = \mathbf{t}_j, \quad j = 1, 2, \ldots, N. \quad (2)$$
The above $N$ equations can be written compactly as

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \quad (3)$$

where $\mathbf{H}$ is the hidden-layer output matrix:

$$\mathbf{H} = \begin{bmatrix} \mathbf{h}(\mathbf{x}_1) \\ \vdots \\ \mathbf{h}(\mathbf{x}_N) \end{bmatrix} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1) \\ \vdots & & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N) \end{bmatrix}_{N \times L},$$

$$\boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^T \\ \vdots \\ \boldsymbol{\beta}_L^T \end{bmatrix}_{L \times m}, \qquad \mathbf{T} = \begin{bmatrix} \mathbf{t}_1^T \\ \vdots \\ \mathbf{t}_N^T \end{bmatrix}_{N \times m}. \quad (4)$$
The Essence of ELM. The hidden layer of ELM need not be iteratively tuned. According to feedforward neural network theory, both the training error $\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|^2$ and the norm of the weights $\|\boldsymbol{\beta}\|$ need to be minimized [8].
2.1. Basic ELM. The hidden node parameters $(\mathbf{a}_i, b_i)$ remain fixed after being randomly generated. The training of an SLFN is equivalent to finding a least-squares solution $\hat{\boldsymbol{\beta}}$ of the linear system $\mathbf{H}\boldsymbol{\beta} = \mathbf{T}$; that is,

$$\|\mathbf{H}\hat{\boldsymbol{\beta}} - \mathbf{T}\| = \min_{\boldsymbol{\beta}} \|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|. \quad (5)$$

The smallest-norm least-squares solution of the above linear system is

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{+}\mathbf{T}, \quad (6)$$

where the generalized inverse $\mathbf{H}^{+}$ can be obtained as $\mathbf{H}^{+} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T$ if $\mathbf{H}^T\mathbf{H}$ is nonsingular, or $\mathbf{H}^{+} = \mathbf{H}^T(\mathbf{H}\mathbf{H}^T)^{-1}$ if $\mathbf{H}\mathbf{H}^T$ is nonsingular [8]. To summarize, the learning process can be stated as follows.
Basic ELM Algorithm. One is given a training set $(\mathbf{x}_i, \mathbf{t}_i)_{i=1}^{N} \subset \mathbb{R}^n \times \mathbb{R}^m$, output function $G(\mathbf{a}_i, b_i, \mathbf{x})$, and hidden node number $L$.

Step 1. Randomly assign hidden node parameters $(\mathbf{a}_i, b_i)$, $i = 1, \ldots, L$.

Step 2. Calculate the hidden-layer output matrix $\mathbf{H}$.

Step 3. Calculate the output weight $\boldsymbol{\beta}$ using the equation $\boldsymbol{\beta} = \mathbf{H}^{+}\mathbf{T}$, where $\mathbf{T} = [\mathbf{t}_1 \cdots \mathbf{t}_N]^T$.
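The three steps above can be sketched in a few lines of NumPy (a minimal illustration, not the authors' code; the sigmoid activation, the hidden size $L = 50$, and the toy target are our own choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, L, rng):
    """Basic ELM: random hidden layer, least-squares output weights."""
    A = rng.standard_normal((X.shape[1], L))  # Step 1: random input weights a_i
    b = rng.standard_normal(L)                #         and biases b_i
    H = sigmoid(X @ A + b)                    # Step 2: hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T              # Step 3: beta = H^+ T (Moore-Penrose)
    return A, b, beta

def elm_predict(X, A, b, beta):
    return sigmoid(X @ A + b) @ beta

# Toy regression: fit sin(pi * x) on [-1, 1]
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = np.sin(np.pi * X)
A, b, beta = elm_train(X, T, L=50, rng=rng)
err = float(np.max(np.abs(elm_predict(X, A, b, beta) - T)))
```

Only the output weights are solved for; the random hidden layer is never revisited, which is exactly why no iterative tuning is needed.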
2.2. OS-ELM. As mentioned above in Section 2.1, one of the solutions of the output weight vector $\boldsymbol{\beta}$ is

$$\boldsymbol{\beta} = (\mathbf{H}^T \mathbf{H})^{-1} \mathbf{H}^T \mathbf{T}. \quad (7)$$

This algorithm needs just one learning step; indeed, the basic ELM algorithm is a batch learning algorithm. However, in actual application environments the training data may come one by one or chunk by chunk [8]. Therefore, Liang et al. proposed the OS-ELM algorithm [9], which can adjust
Figure 1: Single-hidden-layer feedforward network: input $\mathbf{x}$ ($n$ input nodes), hidden nodes $G(\mathbf{a}_1, b_1, \mathbf{x}), \ldots, G(\mathbf{a}_L, b_L, \mathbf{x})$, output weights $\beta_1, \ldots, \beta_L$, and output $f(\mathbf{x})$ ($m$ output nodes).
its output weight $\boldsymbol{\beta}$ adaptively when new observations are received. The algorithm can be summarized as follows.

OS-ELM Algorithm. One is given a training set $S = \{(\mathbf{x}_i, \mathbf{t}_i) \mid \mathbf{x}_i \in \mathbb{R}^n, \; \mathbf{t}_i \in \mathbb{R}^m, \; i = 1, \ldots\}$, output function $G(\mathbf{a}_i, b_i, \mathbf{x})$, and hidden node number $L$.

Step 1 (initialization phase). From the given training set $S$, a small chunk of training data $S_0 = \{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^{N_0}$ is used to initialize the learning, where $N_0 \ge L$.

(a) Assign random parameters of hidden nodes $(\mathbf{a}_i, b_i)$, where $i = 1, \ldots, L$.

(b) Calculate the initial hidden-layer output matrix $\mathbf{H}_0$.

(c) Calculate the initial output weight $\boldsymbol{\beta}^{(0)} = \mathbf{P}_0 \mathbf{H}_0^T \mathbf{T}_0$, where $\mathbf{P}_0 = (\mathbf{H}_0^T \mathbf{H}_0)^{-1}$ and $\mathbf{T}_0 = [\mathbf{t}_1 \cdots \mathbf{t}_{N_0}]^T$.

(d) Set $k = 0$, where $k$ is the number of chunks trained so far.

Step 2 (sequential learning phase). (a) Present the $(k+1)$th chunk of new observations $S_{k+1} = \{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=(\sum_{j=0}^{k} N_j)+1}^{\sum_{j=0}^{k+1} N_j}$; here $N_{k+1}$ denotes the number of observations in the $(k+1)$th chunk.

(b) Calculate the partial hidden-layer output matrix $\mathbf{H}_{k+1}$ for the $(k+1)$th chunk of data $S_{k+1}$:

$$\mathbf{H}_{k+1} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_{(\sum_{j=0}^{k} N_j)+1}) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_{(\sum_{j=0}^{k} N_j)+1}) \\ \vdots & & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_{\sum_{j=0}^{k+1} N_j}) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_{\sum_{j=0}^{k+1} N_j}) \end{bmatrix}. \quad (8)$$

Set $\mathbf{T}_{k+1} = [\mathbf{t}_{(\sum_{j=0}^{k} N_j)+1} \cdots \mathbf{t}_{\sum_{j=0}^{k+1} N_j}]^T$.

(c) Calculate the output weight:

$$\mathbf{P}_{k+1} = \mathbf{P}_k - \mathbf{P}_k \mathbf{H}_{k+1}^T \left( \mathbf{I} + \mathbf{H}_{k+1} \mathbf{P}_k \mathbf{H}_{k+1}^T \right)^{-1} \mathbf{H}_{k+1} \mathbf{P}_k,$$
$$\boldsymbol{\beta}^{(k+1)} = \boldsymbol{\beta}^{(k)} + \mathbf{P}_{k+1} \mathbf{H}_{k+1}^T \left( \mathbf{T}_{k+1} - \mathbf{H}_{k+1} \boldsymbol{\beta}^{(k)} \right). \quad (9)$$

(d) Set $k = k + 1$. Go to Step 2(a).

It can be seen from the above OS-ELM algorithm that OS-ELM reduces to the batch ELM when $N_0 = N$. For a detailed description, readers can refer to Liang et al. [9]. From (9) it can be seen that the OS-ELM algorithm is similar to recursive least squares (RLS) [13]; hence all the convergence results of RLS apply here.
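The two phases can be sketched as follows (our own NumPy illustration, not the authors' code; the sigmoid activation and all sizes are arbitrary). Because the recursion is exactly RLS, processing the data chunk by chunk reproduces the batch least-squares fit:

```python
import numpy as np

def hidden(X, A, b):
    """Sigmoid hidden-layer output matrix H."""
    return 1.0 / (1.0 + np.exp(-(X @ A + b)))

def oselm_init(X0, T0, L, rng):
    """Step 1: random hidden nodes, P_0 = (H_0^T H_0)^{-1}, beta^(0)."""
    A = rng.standard_normal((X0.shape[1], L))
    b = rng.standard_normal(L)
    H0 = hidden(X0, A, b)
    P = np.linalg.inv(H0.T @ H0)
    beta = P @ H0.T @ T0
    return A, b, P, beta

def oselm_update(P, beta, H, T):
    """Step 2(c): one sequential update, Eq. (9)."""
    K = P @ H.T @ np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - K @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
    return P, beta
```

After feeding all remaining chunks through `oselm_update`, the weights agree with the batch solution computed on the full data set, which is the RLS property noted above.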
3. Adaptive Control by Using OS-ELM Neural Networks
3.1. Identification. Consider the following SISO nonlinear discrete-time system:

$$y_{k+1} = f_0(\cdot) + g_0(\cdot)\, u_{k-d+1}, \quad (10)$$

where $f_0$ and $g_0$ are infinitely differentiable functions defined on compact sets $F \subset \mathbb{R}^{n+m}$, $G \subset \mathbb{R}^{n+m}$, with arguments

$$y_{k-n+1}, \ldots, y_k, \quad u_{k-d-m+1}, \ldots, u_{k-d}. \quad (11)$$

Here $y$ is the output, $u$ is the input, $m \le n$, $d$ is the relative degree of the system, and $g_0$ is bounded away from zero. The arguments of $f_0$ and $g_0$ are real variables. In this paper we consider a simplified form with $d = 1$. Then the SISO nonlinear discrete-time system can be described as

$$y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\, u_k, \quad (12)$$

where $\mathbf{x}_k = [y_{k-n+1}, \ldots, y_k, u_{k-m}, \ldots, u_{k-1}]$. Let us first assume that there exist exact weights $\mathbf{w}^*$ and $\mathbf{v}^*$ such that the functions
$f[\mathbf{x}_k, \mathbf{w}^*]$ and $g[\mathbf{x}_k, \mathbf{v}^*]$ can approximate the functions $f_0[\mathbf{x}_k]$ and $g_0[\mathbf{x}_k]$ without unmodelled dynamics. The nonlinear system (12) can then be described by the neural networks as follows:

$$y_{k+1} = f[\mathbf{x}_k, \mathbf{w}^*] + g[\mathbf{x}_k, \mathbf{v}^*]\, u_k, \quad (13)$$

where the functions $f[\cdot, \cdot]$ and $g[\cdot, \cdot]$ are selected as OS-ELM neural networks; that is,

$$f[\mathbf{x}_k, \mathbf{w}^*] = \sum_{i=1}^{L} w_i^* G(\mathbf{a}_i, b_i, \mathbf{x}_k), \qquad g[\mathbf{x}_k, \mathbf{v}^*] = \sum_{i=L+1}^{2L} v_i^* G(\mathbf{a}_i, b_i, \mathbf{x}_k). \quad (14)$$

The parameters $\mathbf{a}_i, b_i$ are generated randomly and kept constant. Then (13) can be rewritten as

$$y_{k+1} = \boldsymbol{\Phi}_k \boldsymbol{\theta}_0^*, \quad (15)$$

where $\boldsymbol{\Phi}_k = [G(\mathbf{a}_1, b_1, \mathbf{x}_k) \cdots G(\mathbf{a}_L, b_L, \mathbf{x}_k),\; G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_k) u_k \cdots G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_k) u_k]$ and $\boldsymbol{\theta}_0^* = [\mathbf{w}^*, \mathbf{v}^*]^T$.
Let $\mathbf{w}_k$ and $\mathbf{v}_k$ denote the estimates of $\mathbf{w}^*$ and $\mathbf{v}^*$ at time $k$. The neural network identification model is then

$$\hat{y}_{k+1} = f[\mathbf{x}_k, \mathbf{w}_k] + g[\mathbf{x}_k, \mathbf{v}_k]\, u_k; \quad (16)$$

that is,

$$\hat{y}_{k+1} = \boldsymbol{\Phi}_k \boldsymbol{\theta}_k, \quad (17)$$

where $\boldsymbol{\theta}_k = \left[ \begin{smallmatrix} \mathbf{w}_k \\ \mathbf{v}_k \end{smallmatrix} \right]$. Define $e_{k+1}^*$ as

$$e_{k+1}^* = y_{k+1} - \hat{y}_{k+1} = y_{k+1} - \boldsymbol{\Phi}_k \boldsymbol{\theta}_k. \quad (18)$$
Referring to the OS-ELM algorithm, and taking into account that in a control system the online observation data form a time series and arrive one by one, we choose $N_j \equiv 1$, and the updating rule can be rewritten as

$$\mathbf{P}_k = \mathbf{P}_{k-1} - \frac{\mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T \boldsymbol{\Phi}_k \mathbf{P}_{k-1}}{1 + \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T}, \qquad \boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k + \mathbf{P}_k \boldsymbol{\Phi}_k^T e_{k+1}^*, \quad (19)$$

where the initial $\boldsymbol{\theta}_0$ and $\mathbf{P}_0 = (\boldsymbol{\Phi}_0^T \boldsymbol{\Phi}_0)^{-1}$ are obtained in Step 1 (initialization phase) of OS-ELM.

Due to the existence of unmodelled dynamics, the nonlinear system (15) can be represented as

$$y_{k+1} = \boldsymbol{\Phi}_k \boldsymbol{\theta}_0^* + \Delta_f, \quad (20)$$

where $\Delta_f$ is the model error and satisfies $\sup |\Delta_f| \le \varepsilon$. Then a dead-zone algorithm is used:

$$\mathbf{P}_k = \mathbf{P}_{k-1} - \frac{\sigma_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T \boldsymbol{\Phi}_k \mathbf{P}_{k-1}}{1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T}, \quad (21)$$

$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k + \sigma_k \mathbf{P}_k \boldsymbol{\Phi}_k^T e_{k+1}^*, \quad (22)$$

where

$$\sigma_k = \begin{cases} 1, & \dfrac{|e_{k+1}^*|^2}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} > \varepsilon^2, \\[2mm] 0, & \text{otherwise}. \end{cases} \quad (23)$$
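Equations (21)–(23) amount to a scalar-output RLS update whose adaptation is frozen whenever the normalized error is already within the model-error bound $\varepsilon$. A minimal sketch (our own illustration; to avoid the implicit $\sigma_k$ on both sides of (23), the gate below tests the a priori quantity $1 + \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T$):

```python
import numpy as np

def deadzone_rls_step(P, theta, phi, y, eps):
    """One dead-zone RLS step, cf. Eqs. (21)-(23).
    phi: row regressor (1, p); theta: (p, 1); y: scalar output."""
    e = float(y - phi @ theta)             # a priori error e*_{k+1}
    denom = float(1.0 + phi @ P @ phi.T)   # 1 + Phi P Phi^T
    sigma = 1.0 if (e / denom) ** 2 > eps ** 2 else 0.0
    if sigma:
        P = P - (P @ phi.T @ phi @ P) / denom  # Eq. (21) with sigma = 1
        theta = theta + P @ phi.T * e          # Eq. (22)
    return P, theta, sigma
```

With bounded disturbances no larger than `eps`, the update stops once the estimate is consistent with the data to within the dead zone, which is what keeps the parameter error monotonically nonincreasing in Theorem 1 below.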
Theorem 1. Given the system described by (20), if the updating rule described by (21)–(23) is used, we have the following results:

(i)
$$\left\| \boldsymbol{\theta}_0^* - \boldsymbol{\theta}_k \right\|^2 \le \left\| \boldsymbol{\theta}_0^* - \boldsymbol{\theta}_{k-1} \right\|^2 \le \cdots \le \left\| \boldsymbol{\theta}_0^* - \boldsymbol{\theta}_1 \right\|^2; \quad (24)$$

(ii)
$$\lim_{N \to \infty} \sum_{k=1}^{N} \sigma_k \left[ \frac{e_{k+1}^{*2}}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} - \varepsilon^2 \right] < \infty, \quad (25)$$

and this implies

(a)
$$\lim_{k \to \infty} \sigma_k \left[ \frac{e_{k+1}^{*2}}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} - \varepsilon^2 \right] = 0, \quad (26)$$

(b)
$$\limsup_{k \to \infty} \frac{e_{k+1}^{*2}}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} \le \varepsilon^2. \quad (27)$$

For the detailed proof of Theorem 1, readers can refer to [14]; it can also be found in the appendix.
3.2. Adaptive Control Using OS-ELM. In this section we discuss the adaptive control problem using OS-ELM neural networks. For the system described as

$$y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\, u_k \quad (28)$$

with OS-ELM neural network identification model

$$\hat{y}_{k+1} = f[\mathbf{x}_k, \mathbf{w}_k] + g[\mathbf{x}_k, \mathbf{v}_k]\, u_k, \quad (29)$$

the control $u_k$ can be given as

$$u_k = \frac{-f[\mathbf{x}_k, \mathbf{w}_k] + r_{k+1}}{g[\mathbf{x}_k, \mathbf{v}_k]}, \quad (30)$$

where $r_{k+1}$ is the desired output. The control $u_k$ is applied to both the plant and the neural network model.
3.2.1. OS-ELM Adaptive Control Algorithm

Step 1 (initialization phase). Given a random input sequence $u_k$, $k = 1, \ldots, N_0$, from the controlled plant (12) we can get a small chunk of off-line training data $S_0 = \{(\mathbf{x}_i, y_{i+1})\}_{i=1}^{N_0} \subset \mathbb{R}^n \times \mathbb{R}^1$, together with output function $G(\mathbf{a}_i, b_i, \mathbf{x})$ and hidden node number $2L$, which are used to initialize the learning, where $N_0 \ge 2L$.

(a) Assign random parameters of hidden nodes $(\mathbf{a}_i, b_i)$, where $i = 1, \ldots, 2L$.

(b) Calculate the initial hidden-layer output matrix $\boldsymbol{\Phi}_0$:

$$\boldsymbol{\Phi}_0 = \begin{bmatrix} \boldsymbol{\phi}_1 \\ \vdots \\ \boldsymbol{\phi}_{N_0} \end{bmatrix}_{N_0 \times 2L}, \quad (31)$$

where $\boldsymbol{\phi}_i = [G(\mathbf{a}_1, b_1, \mathbf{x}_i) \cdots G(\mathbf{a}_L, b_L, \mathbf{x}_i),\; G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_i) u_i \cdots G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_i) u_i]$, $i = 1, \ldots, N_0$.

(c) Calculate the initial output weight as follows:

$$\boldsymbol{\theta}_0 = \mathbf{P}_0 \boldsymbol{\Phi}_0^T \mathbf{Y}_0, \quad (32)$$

where $\mathbf{P}_0 = (\boldsymbol{\Phi}_0^T \boldsymbol{\Phi}_0)^{-1}$, $\boldsymbol{\theta}_0 = \left[ \begin{smallmatrix} \mathbf{w}_0 \\ \mathbf{v}_0 \end{smallmatrix} \right]$, and $\mathbf{Y}_0 = [y_2 \cdots y_{N_0+1}]^T$.

(d) Set $k = N_0 + 1$.

Step 2 (adaptive control phase). (a) Calculate the control $u_k$ and the output of the plant $y_{k+1}$:

$$u_k = \frac{-f[\mathbf{x}_k, \mathbf{w}_k] + r_{k+1}}{g[\mathbf{x}_k, \mathbf{v}_k]}, \qquad y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\, u_k, \quad (33)$$

where $\boldsymbol{\theta}_{N_0+1} = \left[ \begin{smallmatrix} \mathbf{w}_{N_0+1} \\ \mathbf{v}_{N_0+1} \end{smallmatrix} \right] \triangleq \boldsymbol{\theta}_0$ and $\mathbf{x}_k = [y_{k-n+1}, \ldots, y_k, u_{k-m}, \ldots, u_{k-1}]$.

(b) Calculate the partial hidden-layer output matrix $\boldsymbol{\Phi}_k$ for the $k$th observation:

$$\boldsymbol{\Phi}_k = [G(\mathbf{a}_1, b_1, \mathbf{x}_k) \cdots G(\mathbf{a}_L, b_L, \mathbf{x}_k),\; G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_k) u_k \cdots G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_k) u_k]. \quad (34)$$

(c) Calculate the output weight:

$$\mathbf{P}_k = \mathbf{P}_{k-1} - \frac{\sigma_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T \boldsymbol{\Phi}_k \mathbf{P}_{k-1}}{1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T}, \qquad \boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k + \sigma_k \mathbf{P}_k \boldsymbol{\Phi}_k^T e_{k+1}^*, \quad (35)$$

where $\mathbf{P}_{N_0} \triangleq \mathbf{P}_0$ and

$$\sigma_k = \begin{cases} 1, & \dfrac{|e_{k+1}^*|^2}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} > \varepsilon^2, \\[2mm] 0, & \text{otherwise}. \end{cases} \quad (36)$$

(d) Set $k = k + 1$. Go to Step 2(a).
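To make Steps 1 and 2 concrete, the following is a compact sketch of the whole loop on the scalar plant used later in Section 4 (our own illustration, not the authors' code: the regressor uses only $y_k$, $L = 20$, the dead zone is dropped by taking $\sigma_k \equiv 1$, and the small gain safeguard is our addition):

```python
import numpy as np

rng = np.random.default_rng(0)
L, N0, F = 20, 300, 0.1
A = rng.standard_normal((1, 2 * L))     # random hidden parameters a_i, b_i
b = rng.standard_normal(2 * L)

def plant(y, u):                        # Eq. (50) with F = 0.1
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y * y) - 0.5 * (1 + y) * u

def features(y):
    """Hidden-node outputs G(a_i, b_i, y), i = 1..2L."""
    z = np.array([[y]]) @ A + b
    return (1.0 / (1.0 + np.exp(-z))).ravel()

def regressor(y, u):
    """Phi_k = [G_1..G_L, G_{L+1} u .. G_{2L} u], Eq. (34)."""
    G = features(y)
    return np.concatenate([G[:L], G[L:] * u]).reshape(1, -1)

# Step 1: initialization with random inputs in [0, 0.2]
y, rows, targets = 0.0, [], []
for _ in range(N0):
    u = rng.uniform(0.0, 0.2)
    rows.append(regressor(y, u).ravel())
    y = plant(y, u)
    targets.append(y)
Phi0 = np.array(rows)
Y0 = np.array(targets).reshape(-1, 1)
P = np.linalg.inv(Phi0.T @ Phi0 + 1e-6 * np.eye(2 * L))  # small ridge for safety
theta = P @ Phi0.T @ Y0                                   # Eq. (32)

# Step 2: certainty-equivalence control, Eqs. (30) and (19)
errs = []
for k in range(800):
    r = 0.2 * np.sin(2 * np.pi * (k + 1) / 100)  # reference r_{k+1}, Eq. (51)
    G = features(y)
    f_hat = float(G[:L] @ theta[:L, 0])
    g_hat = float(G[L:] @ theta[L:, 0])
    g_hat = min(g_hat, -0.05)       # safeguard (our addition): keep gain away from 0
    u = (-f_hat + r) / g_hat        # Eq. (30)
    ph = regressor(y, u)
    y = plant(y, u)
    e = y - float(ph @ theta)       # identification error e*_{k+1}
    P = P - (P @ ph.T @ ph @ P) / (1.0 + float(ph @ P @ ph.T))
    theta = theta + P @ ph.T * e    # Eq. (19)
    errs.append(abs(y - r))
```

After the transient, the tracking error `errs` settles to small values, mirroring the behavior reported for Figures 3 and 4.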
3.3. Stability Analysis. Define the error between the neural network identification model output and the reference model output as

$$\hat{e}_{k+1} = \hat{y}_{k+1} - r_{k+1}; \quad (37)$$

that is,

$$\hat{e}_{k+1} = f[\mathbf{x}_k, \mathbf{w}_k] + g[\mathbf{x}_k, \mathbf{v}_k]\, u_k - r_{k+1}. \quad (38)$$

By (30) we have

$$\hat{e}_{k+1} = r_{k+1} - r_{k+1} = 0. \quad (39)$$

The control error is defined as

$$e_{k+1} = y_{k+1} - r_{k+1}; \quad (40)$$

that is,

$$e_{k+1} = y_{k+1} - \hat{y}_{k+1} + \hat{y}_{k+1} - r_{k+1}, \quad (41)$$

so

$$e_{k+1} = e_{k+1}^* + \hat{e}_{k+1} = e_{k+1}^*. \quad (42)$$

Using the properties of the parameter estimation algorithm stated in Theorem 1, we immediately have

$$\lim_{k \to \infty} \sigma_k \left[ \frac{e_{k+1}^2}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} - \varepsilon^2 \right] = 0. \quad (43)$$
Because the activation function $G(\mathbf{a}_i, b_i, \mathbf{x})$ is bounded, $f$ and $g$ are bounded. If $u_k$ is chosen as (30), then $u_k$ is bounded, and hence $\boldsymbol{\Phi}_k$ is bounded. From (43) we have

$$\limsup_{k \to \infty} \frac{e_{k+1}^2}{\left(1 + \sigma_k \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^T\right)^2} \le \varepsilon^2. \quad (44)$$

Considering that $\boldsymbol{\Phi}_k$ is bounded, $e_{k+1}^2$ is bounded. From (40), $y_{k+1} = e_{k+1} + r_{k+1}$; since $r_k$ is bounded, $y_{k+1}$ is bounded too.

To sum up, OS-ELM neural networks have the capability of identifying and controlling nonlinear systems. However, as is well known, conventional adaptive control systems are usually based on a fixed or slowly adapting model; they cannot react quickly to abrupt changes and will produce large transient errors before convergence. In this case, MMAC is a useful tool. We can construct multiple OS-ELM neural networks to cover the uncertainty of the parameters of the plant by initializing the OS-ELM neural networks in different regions. Meanwhile, an effective index function can be used to select the optimal identification model and the optimal controller. MMAC based on the OS-ELM algorithm can improve the control performance of the system greatly and avoid the influence of jumping parameters on the plant.
3.4. Multiple Model Adaptive Control. Multiple model adaptive control can be regarded as an extension of conventional indirect adaptive control. The control system contains $M$ identification models, denoted by $I^l$, $l \in \{1, 2, \ldots, M\}$. According to (16) and (30), the multiple model set can be established as follows:

$$I^l: \quad \hat{y}_{k+1}^l = f^l[\mathbf{x}_k, \mathbf{w}_k^l] + g^l[\mathbf{x}_k, \mathbf{v}_k^l]\, u_k^l, \quad (45)$$

where

$$f^l[\mathbf{x}_k, \mathbf{w}_k^l] = \sum_{i=1}^{L} w_i^l G(\mathbf{a}_i, b_i, \mathbf{x}_k), \qquad g^l[\mathbf{x}_k, \mathbf{v}_k^l] = \sum_{i=L+1}^{2L} v_i^l G(\mathbf{a}_i, b_i, \mathbf{x}_k). \quad (46)$$

The control $u_k^l$ can be written as

$$u_k^l = \frac{-f^l[\mathbf{x}_k, \mathbf{w}_k^l] + r_{k+1}}{g^l[\mathbf{x}_k, \mathbf{v}_k^l]}. \quad (47)$$

Define $e_{k+1}^{*l} \triangleq y_{k+1} - \hat{y}_{k+1}^l$. At any instant, one of the models $I^l$ is selected by a switching rule, and the corresponding control input is used to control the plant. Referring to Theorem 1, the index function has the form

$$J_k^l = \sum_{\tau=1}^{k} \sigma_\tau \left[ \frac{e_{\tau+1}^{*l\,2}}{\left(1 + \sigma_\tau \boldsymbol{\Phi}_\tau \mathbf{P}_{\tau-1} \boldsymbol{\Phi}_\tau^T\right)^2} - \varepsilon^2 \right]. \quad (48)$$
At every sample time, the model corresponding to the minimum $J_k^l$ is chosen; that is, $I^i$ is chosen if

$$J_k^i = \min_l J_k^l, \quad (49)$$

and $u_k^i$ is used as the control input at that instant. The structure of multiple model adaptive control is shown in Figure 2.

Referring to paper [15], Narendra and Xiang established four kinds of MMAC structure: (1) all models are fixed; (2) all models are adaptive; (3) $(M-1)$ fixed models and one adaptive model; (4) $(M-2)$ fixed models, one free-running adaptive model, and one reinitialized adaptive model. A similar stability result can be obtained here; for a detailed proof of the stability of MMAC, the reader can refer to [15–17].
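The switching rule (48)–(49) can be sketched as follows (our own minimal illustration with hypothetical per-model error histories; $\sigma_\tau$ is taken as 1 for simplicity):

```python
import numpy as np

def switch_index(errors, denoms, eps):
    """J^l_k per Eq. (48) for one model: cumulative normalized
    squared identification error minus the dead-zone level eps^2."""
    e = np.asarray(errors)
    d = np.asarray(denoms)   # 1 + Phi P Phi^T at each step
    return float(np.sum(e ** 2 / d ** 2 - eps ** 2))

def select_model(histories, eps):
    """Eq. (49): pick the model with the smallest index J^l_k."""
    J = [switch_index(e, d, eps) for e, d in histories]
    return int(np.argmin(J)), J

# Three hypothetical models: model 1 has the smallest errors
histories = [
    ([0.5, 0.4, 0.3], [1.1, 1.1, 1.1]),
    ([0.05, 0.04, 0.03], [1.1, 1.1, 1.1]),
    ([0.2, 0.2, 0.2], [1.1, 1.1, 1.1]),
]
best, J = select_model(histories, eps=0.01)  # best == 1
```

The cumulative form of (48) gives a long-memory performance measure, so a model that fit well in the past is not discarded because of a single noisy sample.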
4. Simulation Results

4.1. Adaptive Control. In this section we present simulation results of adaptive control of nonlinear discrete-time systems using OS-ELM neural networks. The nonlinear system considered is

$$y_{k+1} = \frac{y_k (1 - 0.5F) + 0.5F}{1 + y_k^2} - 0.5 (1 + y_k)\, u_k, \quad (50)$$

where we define $f[y_k] = (y_k (1 - 0.5F) + 0.5F)/(1 + y_k^2)$ and $g[y_k] = -0.5(1 + y_k)$, and $F = 0.1$ is a scalar. The control goal is to force the system states to track a reference model trajectory. We select the reference model as

$$r_k = 0.2 \sin\left(\frac{2\pi k}{100}\right). \quad (51)$$

The single-input/single-output nonlinear discrete-time system can be modeled by

$$\hat{y}_{k+1} = f[y_k, \mathbf{w}_k] + g[y_k, \mathbf{v}_k]\, u_k, \quad (52)$$

where $f$ and $g$ are OS-ELM neural networks. The neural networks $f$ and $g$ have the same structure $\aleph_{1,50,1}$ (one input node, 50 hidden nodes, one output node). Based on the OS-ELM adaptive control algorithm, the parameters $\mathbf{w}_k$ and $\mathbf{v}_k$ of the neural networks are updated to $\mathbf{w}_{k+1}$ and $\mathbf{v}_{k+1}$ using the OS-ELM algorithm.

For the OS-ELM control algorithm, in Step 1 we choose $N_0 = 300$, and the control $u_k$ is selected randomly from $[0, 0.2]$. A small chunk of off-line training data is used to calculate the initial output weights $\mathbf{w}_0$ and $\mathbf{v}_0$. Since the ELM algorithm adjusts only the output weights, it shows fast and accurate identification results.

Following this, in Step 2, identification and control are implemented together. The response of the controlled system with the reference input (51) using the OS-ELM algorithm is shown in Figure 3. We can see that the controlled system tracks the reference model rapidly. The control error is shown in Figure 4.
4.2. Choice of the Data Set. In the simulation process we find that OS-ELM is sensitive to the initial training data, which directly determines the initial value of the adaptive control. When we select different initial training samples, the control performance varies greatly. Figure 5 shows the control results of OS-ELM neural networks with different initial training samples.

In Figure 6 we select six different ranges of random input-output pairs. We can see that each range of input values, together with the corresponding output values, concentrates in a single area. According to the output range of the controlled plant, we can choose a specific data area to train the neural network. In Figure 5, the output range of the controlled system is $[-0.2, 0.2]$. If we adopt random inputs from $[0, 0.2]$ (the red area in Figure 6(b)) to initialize OS-ELM, the control result is that shown in Figure 5(b). On the contrary, if random inputs from the other areas (the green, blue, yellow, and other areas in Figure 6(b)) are used to initialize OS-ELM, the control result is poor (Figure 5(a)).

On one hand, in the actual production process we have a large amount of field data distributed over a wide range; on the other hand, we generally know the target output range of the controlled plant. Then, according to the output range of the controlled plant, we can choose a specific data area to train the neural network. In this way, the control accuracy can be improved and computation time can be saved.
Figure 2: Structure of multiple model adaptive control. The plant output $y_k$ is compared with the outputs $y^1_k, \ldots, y^l_k, \ldots, y^M_k$ of models $I^1, \ldots, I^M$ to give the identification errors $e^{*1}_k, \ldots, e^{*l}_k, \ldots, e^{*M}_k$; controllers $C^1, \ldots, C^M$ are driven by the reference model output $r_k$, and a switching rule selects which control $u^l_k$ is applied to the plant as $u_k$; $y_k - r_k$ is the control error.
Figure 3: Nonlinear system adaptive control by using OS-ELM ($r_k$ and $y_k$ versus $k$).
4.3. Multiple Model Adaptive Control. From the above simulation results we find that OS-ELM neural networks show excellent identification capability. But the OS-ELM adaptive control algorithm contains two steps (an initialization phase and an adaptive control phase) and performs well only for systems with fixed or slowly time-varying parameters. For a system with jumping parameters, once a change of parameters happens, the OS-ELM adaptive control algorithm needs to implement Step 1 again, or the control performance will be very poor. To solve this kind of problem, MMAC (multiple model adaptive control) is presented as a very useful tool.
Figure 4: Control error versus $k$.
Consider the controlled plant

$$y_{k+1} = \frac{y_k (1 - 0.5F) + 0.5F}{1 + y_k^2} - 0.5 (1 + y_k)\, u_k, \quad (53)$$

where

$$F = \begin{cases} 0.1, & 1 \le k < 500, \\ 0.15, & 500 \le k < 1000, \\ 0.355, & 1000 \le k \le 1500. \end{cases} \quad (54)$$
We choose the index function as in (48) and construct four OS-ELM neural network models. Among them, model 1 is a reinitialized adaptive model; models 2 and 3 are fixed models, formed according to the environments of the changing parameters; model 4 is a free-running adaptive model.
Figure 5: OS-ELM control results with different initial training samples ($r_k$ and $y_k$ versus $k$): (a) $(-0.2, 0)$ random input; (b) $(0, 0.2)$ random input.
Figure 6: Data set. (a) Output results $y_k$ with different input data (input ranges $[-0.2, 0]$, $[0, 0.2]$, $[0.2, 0.5]$, $[0.5, 1]$, $[1, 1.5]$, $[1.5, 2]$). (b) Different ranges of input values and the corresponding output values concentrate in a single area (data sets 1–6 in the $(u_k, y_k)$ plane).
At any sample time, model 1 chooses the optimal model from the model set formed by models 2, 3, and 4. Figure 7 shows the responses under multiple models and under a single model.

In Figure 7(a) we can see that single-model adaptive control shows poor control quality for the system with jumping parameters: once the parameters change abruptly, the neural network model needs to adjust its weights, which are far from the true values. Figure 7(c) shows the resulting bad control input. In MMAC, the reinitialized adaptive model can initialize its weights by choosing the best weights from a set of models based on the past performance of the plant. At every sample time, based on the index function, the best model in the model set is selected, and the parameters of the reinitialized model are updated from the parameters of this model. From Figure 7(b) we can see that MMAC improves the control quality dramatically compared to Figure 7(a). Figure 7(d) shows the control sequence, and Figure 7(e) shows the switching process of the MMAC controller.
5. Conclusion

The application of OS-ELM to the identification and control of nonlinear dynamic systems has been studied carefully. The OS-ELM neural network can improve the identification accuracy and the control quality dramatically. Since the training process of the OS-ELM neural network is sensitive to the initial training data, different scopes of training data can be used to initialize the OS-ELM neural network based on the output range of the system. The topological structure of the OS-ELM neural network can be adjusted dynamically by using a multiple model switching strategy when it is used to control systems with jumping parameters. Simulation results show that MMAC
Figure 7: Multiple OS-ELM neural network model adaptive control: (a) single model response ($r_k$, $y_k$, $0 \le k \le 1500$); (b) multiple model response ($r_k$, $y_k$); (c) control input, single model; (d) control input, multiple models; (e) switching process.
based on OS-ELM, which is presented in this paper, can improve the control performance dramatically.
Appendix

Lemma A.1 (matrix inversion lemma; see [18]). If
$$P_k^{-1} = P_{k-1}^{-1} + \sigma_k \Phi_k^T \Phi_k, \tag{A.1}$$
where the scalar $\sigma_k > 0$, then $P_k$ is related to $P_{k-1}$ via
$$P_k = P_{k-1} - \frac{\sigma_k P_{k-1}\Phi_k^T \Phi_k P_{k-1}}{1 + \sigma_k \Phi_k P_{k-1}\Phi_k^T}. \tag{A.2}$$
Also,
$$P_k \Phi_k^T = \frac{P_{k-1}\Phi_k^T}{1 + \sigma_k \Phi_k P_{k-1}\Phi_k^T}. \tag{A.3}$$
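The two identities can be checked numerically. The sketch below is an illustrative check, not part of the original paper; the dimension, the random positive definite $P_{k-1}$, and $\sigma_k = 0.7$ are arbitrary choices:

```python
import numpy as np

# Numerical check of Lemma A.1: Phi_k is a 1 x p regressor row vector,
# sigma_k > 0 a scalar, and P_{k-1} a symmetric positive definite matrix.
rng = np.random.default_rng(0)
p = 4
A = rng.standard_normal((p, p))
P_prev = A @ A.T + p * np.eye(p)           # P_{k-1}, symmetric positive definite
Phi = rng.standard_normal((1, p))          # Phi_k
sigma = 0.7                                # sigma_k > 0

# (A.1): P_k^{-1} = P_{k-1}^{-1} + sigma_k Phi_k^T Phi_k, inverted explicitly
P_k = np.linalg.inv(np.linalg.inv(P_prev) + sigma * Phi.T @ Phi)

# (A.2): rank-one downdate form of P_k
denom = 1.0 + sigma * float(Phi @ P_prev @ Phi.T)
P_k_lemma = P_prev - sigma * (P_prev @ Phi.T @ Phi @ P_prev) / denom

# (A.3): P_k Phi_k^T = P_{k-1} Phi_k^T / (1 + sigma_k Phi_k P_{k-1} Phi_k^T)
lhs = P_k @ Phi.T
rhs = (P_prev @ Phi.T) / denom

print(np.allclose(P_k, P_k_lemma))   # True
print(np.allclose(lhs, rhs))         # True
```

Both comparisons agree to machine precision, which is what makes the rank-one form (A.2) attractive: it avoids an explicit matrix inversion at every step.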
Proof of Theorem 1. Define $\beta_k \triangleq 1/(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T)$; then from (20), (22), and (A.3) we have
$$\tilde{\theta}_{k+1} = \tilde{\theta}_k - \sigma_k\beta_k P_{k-1}\Phi_k^T e^*_{k+1}, \tag{A.4}$$
where $\tilde{\theta}_{k+1} \triangleq \theta^*_0 - \theta_{k+1}$. Then from (A.4) we can obtain
$$\Phi_k\tilde{\theta}_{k+1} + \Delta_f = \beta_k e^*_{k+1}. \tag{A.5}$$
Introducing the Lyapunov function $V_{k+1} = \tilde{\theta}_{k+1}^T P_k^{-1}\tilde{\theta}_{k+1}$ and substituting (A.1) into it, we obtain
$$\begin{aligned} V_{k+1} &= \tilde{\theta}_{k+1}^T P_{k-1}^{-1}\tilde{\theta}_{k+1} + \sigma_k\tilde{\theta}_{k+1}^T\Phi_k^T\Phi_k\tilde{\theta}_{k+1} \\ &= \left[\tilde{\theta}_k - \sigma_k\beta_k P_{k-1}\Phi_k^T e^*_{k+1}\right]^T P_{k-1}^{-1}\left[\tilde{\theta}_k - \sigma_k\beta_k P_{k-1}\Phi_k^T e^*_{k+1}\right] + \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}\right)^2 \\ &= \tilde{\theta}_k^T P_{k-1}^{-1}\tilde{\theta}_k - 2\sigma_k\beta_k\Phi_k\tilde{\theta}_k e^*_{k+1} + \sigma_k^2\beta_k^2 e^{*2}_{k+1}\Phi_k P_{k-1}\Phi_k^T + \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}\right)^2. \end{aligned} \tag{A.6}$$
Substituting (A.5) into the above equation,
$$V_{k+1} = V_k - 2\sigma_k\Phi_k\tilde{\theta}_k\left(\Phi_k\tilde{\theta}_{k+1}+\Delta_f\right) + \sigma_k^2\Phi_k P_{k-1}\Phi_k^T\left(\Phi_k\tilde{\theta}_{k+1}+\Delta_f\right)^2 + \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}\right)^2. \tag{A.7}$$
Referring to [14], we can obtain
$$\begin{aligned} V_{k+1} &= V_k - \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}\right)^2 - 2\sigma_k\Phi_k\tilde{\theta}_{k+1}\Delta_f - \sigma_k^2\Phi_k P_{k-1}\Phi_k^T\left(\Phi_k\tilde{\theta}_{k+1}+\Delta_f\right)^2 \\ &\le V_k - \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}\right)^2 - 2\sigma_k\Phi_k\tilde{\theta}_{k+1}\Delta_f; \end{aligned} \tag{A.8}$$
that is,
$$\begin{aligned} V_{k+1} &\le V_k - \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}\right)^2 - 2\sigma_k\Phi_k\tilde{\theta}_{k+1}\Delta_f \\ &= V_k + \sigma_k\Delta_f^2 - \sigma_k\left(\Phi_k\tilde{\theta}_{k+1}+\Delta_f\right)^2 \\ &= V_k + \sigma_k\Delta_f^2 - \sigma_k\beta_k^2 e^{*2}_{k+1} \\ &= V_k - \sigma_k\left(\beta_k^2 e^{*2}_{k+1} - \Delta_f^2\right). \end{aligned} \tag{A.9}$$
Finally, if we choose $\sigma_k$ as in (36), we have
$$V_{k+1} \le V_k - \sigma_k\left(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2\right); \tag{A.10}$$
that is,
$$V_{k+1} - V_k \le -\sigma_k\left(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2\right) \le 0. \tag{A.11}$$
$V_k$ is nonnegative; then
$$\tilde{\theta}_{k+1}^T P_k^{-1}\tilde{\theta}_{k+1} \le \tilde{\theta}_k^T P_{k-1}^{-1}\tilde{\theta}_k \le \tilde{\theta}_1^T P_0^{-1}\tilde{\theta}_1. \tag{A.12}$$
Now from the matrix inversion lemma it follows that
$$\lambda_{\min}\left(P_k^{-1}\right) \ge \lambda_{\min}\left(P_{k-1}^{-1}\right) \ge \lambda_{\min}\left(P_0^{-1}\right); \tag{A.13}$$
this implies
$$\lambda_{\min}\left(P_0^{-1}\right)\left\|\tilde{\theta}_{k+1}\right\|^2 \le \lambda_{\min}\left(P_k^{-1}\right)\left\|\tilde{\theta}_{k+1}\right\|^2 \le \tilde{\theta}_{k+1}^T P_k^{-1}\tilde{\theta}_{k+1} \le \tilde{\theta}_1^T P_0^{-1}\tilde{\theta}_1 \le \lambda_{\max}\left(P_0^{-1}\right)\left\|\tilde{\theta}_1\right\|^2. \tag{A.14}$$
This establishes part (i). Summing both sides of (A.11) from $1$ to $N$, with $V_k$ being nonnegative, we have
$$0 \le V_{N+1} \le V_1 - \sum_{k=1}^{N}\sigma_k\left(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2\right); \tag{A.15}$$
letting $N \to \infty$, we immediately get
$$\lim_{N\to\infty}\sum_{k=1}^{N}\sigma_k\left[\frac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} - \varepsilon^2\right] < \infty, \tag{A.16}$$
so (ii) holds. Then
$$\lim_{k\to\infty}\sigma_k\left[\frac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} - \varepsilon^2\right] = 0, \tag{A.17}$$
and (a) and (b) hold.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the Fund of the National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References
[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270–276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407–413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452–465, 2011.
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985–990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541–552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588–2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche, et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111–124, 2013.
[13] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553–559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669–1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu, et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190–200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.
classification [10–12]. There are only a few published results regarding identification and control of nonlinear dynamic systems. In this paper our interest lies in the identification and control of nonlinear dynamic plants by using OS-ELM neural networks, for which an adaptive controller based on the OS-ELM neural network model can be constructed. In the simulation process, on one hand, since the training process of the OS-ELM neural network is sensitive to the initial training data, different regions of the initial data can cause large diversity in the control result. Thus, large numbers of initial data can be classified beforehand into multiple regions; then, according to the range of the expected output of the controlled plant, we can choose the corresponding initial data to initialize the OS-ELM neural networks. On the other hand, a multiple model switching strategy will be used for adaptive control by using OS-ELM neural networks when a system with jumping parameters is controlled. Based on these two aspects, the OS-ELM algorithm is extended to the field of control.
The paper is organized as follows. The structure of the ELM neural network and the OS-ELM algorithm are introduced in Section 2. Then, adaptive control of nonlinear dynamic systems by using OS-ELM is stated in Section 3. Considering that OS-ELM is sensitive to the initial training data, the relationship between the control performance and the selection of the initial training data is discussed for the OS-ELM neural network; furthermore, simulation results about OS-ELM control and multiple OS-ELM models control are also shown in Section 4. Finally, some conclusions are drawn in Section 5.
2 ELM Neural Networks
Let us consider standard single-hidden-layer feedforward neural networks (SLFNs) with $L$ hidden nodes and activation function $G(\mathbf{a}_i, b_i, \mathbf{x})$; the structure is shown in Figure 1. Given $N$ arbitrary distinct samples $\{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^{N} \subset \mathbb{R}^n \times \mathbb{R}^m$, the network can be mathematically modeled as
$$f_L(\mathbf{x}) = \sum_{i=1}^{L}\beta_i h_i(\mathbf{x}) = \sum_{i=1}^{L}\beta_i G(\mathbf{a}_i, b_i, \mathbf{x}), \tag{1}$$
where $\mathbf{a}_i$ is the weight vector connecting the $i$th hidden neuron and the input neurons, $\beta_i$ is the weight vector connecting the $i$th hidden neuron and the output neurons, $b_i$ is the bias of the $i$th hidden neuron, and $G(\mathbf{a}_i, b_i, \mathbf{x})$ denotes the output function of the $i$th hidden node, which is bounded.

To approximate these $N$ samples $\{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^{N}$ with zero error is to search for a solution $(\mathbf{a}_i, b_i)$ and $\beta_i$ satisfying
$$f_L(\mathbf{x}_j) = \sum_{i=1}^{L}\beta_i G(\mathbf{a}_i, b_i, \mathbf{x}_j) = \mathbf{t}_j, \quad j = 1, 2, \ldots, N. \tag{2}$$
The above $N$ equations can be written compactly as
$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{3}$$
where $\mathbf{H}$ is the hidden-layer output matrix:
$$\mathbf{H} = \begin{bmatrix}\mathbf{h}(\mathbf{x}_1)\\ \vdots\\ \mathbf{h}(\mathbf{x}_N)\end{bmatrix} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1)\\ \vdots & & \vdots\\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N)\end{bmatrix}_{N\times L},\qquad \boldsymbol{\beta} = \begin{bmatrix}\beta_1^T\\ \vdots\\ \beta_L^T\end{bmatrix}_{L\times m},\qquad \mathbf{T} = \begin{bmatrix}\mathbf{t}_1^T\\ \vdots\\ \mathbf{t}_N^T\end{bmatrix}_{N\times m}. \tag{4}$$
The Essence of ELM. The hidden layer of ELM need not be iteratively tuned. According to feedforward neural network theory, both the training error $\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|^2$ and the norm of the weights $\|\boldsymbol{\beta}\|$ need to be minimized [8].
2.1. Basic ELM. The hidden node parameters $(\mathbf{a}_i, b_i)$ remain fixed after being randomly generated. The training of an SLFN is then equivalent to finding a least-squares solution $\hat{\boldsymbol{\beta}}$ of the linear system $\mathbf{H}\boldsymbol{\beta} = \mathbf{T}$; that is,
$$\left\|\mathbf{H}\hat{\boldsymbol{\beta}} - \mathbf{T}\right\| = \min_{\boldsymbol{\beta}}\left\|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\right\|. \tag{5}$$
The smallest-norm least-squares solution of the above linear system is
$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{+}\mathbf{T}, \tag{6}$$
where $\mathbf{H}^{+}$ can be obtained by $\mathbf{H}^{+} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T$ if $\mathbf{H}^T\mathbf{H}$ is nonsingular, and $\mathbf{H}^{+} = \mathbf{H}^T(\mathbf{H}\mathbf{H}^T)^{-1}$ if $\mathbf{H}\mathbf{H}^T$ is nonsingular [8]. To summarize, the learning process can be stated as follows.

Basic ELM Algorithm. One is given a training set $\{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^{N} \subset \mathbb{R}^n \times \mathbb{R}^m$, output function $G(\mathbf{a}_i, b_i, \mathbf{x})$, and hidden node number $L$.

Step 1. Randomly assign hidden node parameters $(\mathbf{a}_i, b_i)$, $i = 1, \ldots, L$.

Step 2. Calculate the hidden-layer output matrix $\mathbf{H}$.

Step 3. Calculate the output weight $\hat{\boldsymbol{\beta}}$ using $\hat{\boldsymbol{\beta}} = \mathbf{H}^{+}\mathbf{T}$, where $\mathbf{T} = [\mathbf{t}_1 \cdots \mathbf{t}_N]^T$.
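The three steps can be sketched as follows. This is a minimal illustration, assuming a sigmoid output function $G(\mathbf{a}, b, \mathbf{x}) = 1/(1+e^{-(\mathbf{a}\cdot\mathbf{x}+b)})$; `elm_train` and `elm_predict` are hypothetical helper names, not from the paper:

```python
import numpy as np

# Sketch of the basic ELM algorithm (Steps 1-3) with sigmoid additive nodes.
def elm_train(X, T, L, rng=np.random.default_rng(0)):
    n = X.shape[1]
    A = rng.uniform(-1, 1, (L, n))            # Step 1: hidden weights a_i, random and fixed
    b = rng.uniform(-1, 1, L)                 # Step 1: hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))  # Step 2: N x L hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T              # Step 3: beta = H^+ T (Moore-Penrose)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))
    return H @ beta

# Usage: approximate t = sin(x) on [0, pi] with L = 20 hidden nodes.
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
A, b, beta = elm_train(X, T, L=20)
err = np.max(np.abs(elm_predict(X, A, b, beta) - T))
```

Only `beta` is learned; the random hidden layer is never tuned, which is exactly why a single least-squares solve suffices.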
2.2. OS-ELM. As mentioned above in Section 2.1, one of the solutions of the output weight vector $\boldsymbol{\beta}$ is
$$\boldsymbol{\beta} = \left(\mathbf{H}^T\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{T}. \tag{7}$$
This algorithm needs just one learning step; indeed, the basic ELM algorithm is a batch learning algorithm. However, in an actual application environment the training data may come one by one or chunk by chunk [8]. Therefore, Liang et al. proposed the OS-ELM algorithm [9], which can adjust
Figure 1: Single-hidden-layer feedforward network (input $\mathbf{x}$, hidden nodes $G(\mathbf{a}_i, b_i, \mathbf{x})$, output weights $\beta_1, \ldots, \beta_L$, output $f(\mathbf{x})$).
its output weight $\boldsymbol{\beta}$ adaptively when new observations are received. The algorithm can be summarized as follows.

OS-ELM Algorithm. One is given a training set $S = \{(\mathbf{x}_i, \mathbf{t}_i) \mid \mathbf{x}_i \in \mathbb{R}^n, \mathbf{t}_i \in \mathbb{R}^m, i = 1, \ldots\}$, output function $G(\mathbf{a}_i, b_i, \mathbf{x})$, and hidden node number $L$.

Step 1 (Initialization Phase). From the given training set $S$, a small chunk of training data $S_0 = \{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^{N_0}$ is used to initialize the learning, where $N_0 \ge L$.

(a) Assign random parameters of hidden nodes $(\mathbf{a}_i, b_i)$, $i = 1, \ldots, L$.

(b) Calculate the initial hidden-layer output matrix $\mathbf{H}_0$.

(c) Calculate the initial output weight $\boldsymbol{\beta}^{(0)} = \mathbf{P}_0\mathbf{H}_0^T\mathbf{T}_0$, where $\mathbf{P}_0 = (\mathbf{H}_0^T\mathbf{H}_0)^{-1}$ and $\mathbf{T}_0 = [\mathbf{t}_1 \cdots \mathbf{t}_{N_0}]^T$.

(d) Set $k = 0$, where $k$ is the number of chunks trained so far.

Step 2 (Sequential Learning Phase). (a) Present the $(k+1)$th chunk of new observations $S_{k+1} = \{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=(\sum_{j=0}^{k}N_j)+1}^{\sum_{j=0}^{k+1}N_j}$; here $N_{k+1}$ denotes the number of observations in the $(k+1)$th chunk.

(b) Calculate the partial hidden-layer output matrix $\mathbf{H}_{k+1}$ for the $(k+1)$th chunk of data $S_{k+1}$:
$$\mathbf{H}_{k+1} = \begin{bmatrix} G\left(\mathbf{a}_1, b_1, \mathbf{x}_{(\sum_{j=0}^{k}N_j)+1}\right) & \cdots & G\left(\mathbf{a}_L, b_L, \mathbf{x}_{(\sum_{j=0}^{k}N_j)+1}\right)\\ \vdots & & \vdots\\ G\left(\mathbf{a}_1, b_1, \mathbf{x}_{\sum_{j=0}^{k+1}N_j}\right) & \cdots & G\left(\mathbf{a}_L, b_L, \mathbf{x}_{\sum_{j=0}^{k+1}N_j}\right)\end{bmatrix}. \tag{8}$$
Set $\mathbf{T}_{k+1} = [\mathbf{t}_{(\sum_{j=0}^{k}N_j)+1} \cdots \mathbf{t}_{\sum_{j=0}^{k+1}N_j}]^T$.

(c) Calculate the output weight:
$$\begin{aligned} \mathbf{P}_{k+1} &= \mathbf{P}_k - \mathbf{P}_k\mathbf{H}_{k+1}^T\left(\mathbf{I} + \mathbf{H}_{k+1}\mathbf{P}_k\mathbf{H}_{k+1}^T\right)^{-1}\mathbf{H}_{k+1}\mathbf{P}_k,\\ \boldsymbol{\beta}^{(k+1)} &= \boldsymbol{\beta}^{(k)} + \mathbf{P}_{k+1}\mathbf{H}_{k+1}^T\left(\mathbf{T}_{k+1} - \mathbf{H}_{k+1}\boldsymbol{\beta}^{(k)}\right). \end{aligned} \tag{9}$$

(d) Set $k = k + 1$. Go to Step 2(a).

It can be seen from the above OS-ELM algorithm that OS-ELM becomes the batch ELM when $N_0 = N$. For a detailed description, readers can refer to Liang et al. [9]. From (9) it can be seen that the OS-ELM algorithm is similar to recursive least squares (RLS) [13]; hence all the convergence results of RLS can be applied here.
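The sequential phase can be sketched as below; `os_elm_update` is an illustrative name. The sanity check exploits the fact that the recursive update (9) reproduces the batch least-squares solution exactly:

```python
import numpy as np

# Sketch of OS-ELM Step 2 (sequential learning): given the partial
# hidden-layer output matrix H_new and target chunk T_new, apply update (9).
def os_elm_update(P, beta, H_new, T_new):
    # P_{k+1} = P_k - P_k H^T (I + H P_k H^T)^{-1} H P_k
    K = np.linalg.inv(np.eye(H_new.shape[0]) + H_new @ P @ H_new.T)
    P = P - P @ H_new.T @ K @ H_new @ P
    # beta_{k+1} = beta_k + P_{k+1} H^T (T_new - H beta_k)
    beta = beta + P @ H_new.T @ (T_new - H_new @ beta)
    return P, beta

# Sanity check: sequential updates reproduce the batch least-squares solution.
rng = np.random.default_rng(1)
H = rng.standard_normal((60, 5))           # stacked hidden-layer outputs
T = rng.standard_normal((60, 1))
H0, T0 = H[:10], T[:10]                    # initialization chunk, N_0 >= L
P = np.linalg.inv(H0.T @ H0)               # P_0 = (H_0^T H_0)^{-1}
beta = P @ H0.T @ T0                       # beta^(0) = P_0 H_0^T T_0
for i in range(10, 60, 10):                # remaining data in chunks of 10
    P, beta = os_elm_update(P, beta, H[i:i+10], T[i:i+10])
batch = np.linalg.lstsq(H, T, rcond=None)[0]
print(np.allclose(beta, batch))            # True
```

The inverse taken in each step is only $N_{k+1} \times N_{k+1}$, so small chunks keep the per-step cost low regardless of how much data has already been seen.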
3 Adaptive Control by Using OS-ELM Neural Networks
3.1. Identification. Consider the following SISO nonlinear discrete-time system:
$$y_{k+1} = f_0(\cdot) + g_0(\cdot)\,u_{k-d+1}, \tag{10}$$
where $f_0$ and $g_0$ are infinitely differentiable functions defined on compact sets $F \subset \mathbb{R}^{n+m}$, $G \subset \mathbb{R}^{n+m}$, with arguments
$$y_{k-n+1}, \ldots, y_k,\quad u_{k-d-m+1}, \ldots, u_{k-d}; \tag{11}$$
$y$ is the output, $u$ is the input, $m \le n$, $d$ is the relative degree of the system, and $g_0$ is bounded away from zero. The arguments of $f_0$ and $g_0$ are real variables. In this paper we consider the simplified form with $d = 1$. Then the SISO nonlinear discrete-time system can be described as
$$y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\,u_k, \tag{12}$$
where $\mathbf{x}_k = [y_{k-n+1}, \ldots, y_k, u_{k-m}, \ldots, u_{k-1}]$. Let us first assume that there exist exact weights $\mathbf{w}^*$ and $\mathbf{v}^*$ such that the functions $f[\mathbf{x}_k, \mathbf{w}^*]$ and $g[\mathbf{x}_k, \mathbf{v}^*]$ can approximate the functions $f_0[\mathbf{x}_k]$ and $g_0[\mathbf{x}_k]$ without unmodelled dynamics. The nonlinear system (12) can be described by the neural networks as follows:
$$y_{k+1} = f[\mathbf{x}_k, \mathbf{w}^*] + g[\mathbf{x}_k, \mathbf{v}^*]\,u_k, \tag{13}$$
where the functions $f[\cdot,\cdot]$ and $g[\cdot,\cdot]$ can be selected as OS-ELM neural networks; that is,
$$f[\mathbf{x}_k, \mathbf{w}^*] = \sum_{i=1}^{L}w^*_i G(\mathbf{a}_i, b_i, \mathbf{x}_k),\qquad g[\mathbf{x}_k, \mathbf{v}^*] = \sum_{i=L+1}^{2L}v^*_i G(\mathbf{a}_i, b_i, \mathbf{x}_k). \tag{14}$$
$\mathbf{a}_i, b_i$ can be generated randomly and kept constant. Then (13) can be rewritten as
$$y_{k+1} = \Phi_k\theta^*_0, \tag{15}$$
where $\Phi_k = [G(\mathbf{a}_1, b_1, \mathbf{x}_k)\;\cdots\;G(\mathbf{a}_L, b_L, \mathbf{x}_k)\;\;G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_k)u_k\;\cdots\;G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_k)u_k]$ and $\theta^*_0 = [\mathbf{w}^*\;\mathbf{v}^*]^T$.

Let $\mathbf{w}_k$ and $\mathbf{v}_k$ denote the estimates of $\mathbf{w}^*$ and $\mathbf{v}^*$ at time $k$. The neural network identification model can be shown as follows:
$$\hat{y}_{k+1} = f[\mathbf{x}_k, \mathbf{w}_k] + g[\mathbf{x}_k, \mathbf{v}_k]\,u_k; \tag{16}$$
that is,
$$\hat{y}_{k+1} = \Phi_k\theta_k, \tag{17}$$
where $\theta_k = \begin{bmatrix}\mathbf{w}_k\\ \mathbf{v}_k\end{bmatrix}$. Define $e^*_{k+1}$ as
$$e^*_{k+1} = y_{k+1} - \hat{y}_{k+1} = y_{k+1} - \Phi_k\theta_k. \tag{18}$$
Referring to the OS-ELM algorithm and taking into account the time series in the control system, the on-line observation data come one by one; if we choose $N_j \equiv 1$, the updating rule can be rewritten as
$$\begin{aligned} P_k &= P_{k-1} - \frac{P_{k-1}\Phi_k^T\Phi_k P_{k-1}}{1 + \Phi_k P_{k-1}\Phi_k^T},\\ \theta_{k+1} &= \theta_k + P_k\Phi_k^T e^*_{k+1}, \end{aligned} \tag{19}$$
where the initial $\theta_0$ and $P_0 = (\Phi_0^T\Phi_0)^{-1}$ can be obtained in Step 1 (Initialization Phase) of OS-ELM.

Due to the existence of unmodelled dynamics, the nonlinear system (15) can be represented as
$$y_{k+1} = \Phi_k\theta^*_0 + \Delta_f, \tag{20}$$
where $\Delta_f$ is the model error and satisfies $\sup|\Delta_f| \le \varepsilon$. Then a dead-zone algorithm will be used:
$$P_k = P_{k-1} - \frac{\sigma_k P_{k-1}\Phi_k^T\Phi_k P_{k-1}}{1 + \sigma_k\Phi_k P_{k-1}\Phi_k^T}, \tag{21}$$
$$\theta_{k+1} = \theta_k + \sigma_k P_k\Phi_k^T e^*_{k+1}, \tag{22}$$
where
$$\sigma_k = \begin{cases}1, & \dfrac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} > \varepsilon^2,\\[2mm] 0, & \text{otherwise.}\end{cases} \tag{23}$$
Theorem 1. Given the system described by (20) with the updating rule described by (21)–(23), we have the following results:

(i)
$$\left\|\theta^*_0 - \theta_k\right\|^2 \le \left\|\theta^*_0 - \theta_{k-1}\right\|^2 \le \left\|\theta^*_0 - \theta_1\right\|^2; \tag{24}$$

(ii)
$$\lim_{N\to\infty}\sum_{k=1}^{N}\sigma_k\left[\frac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} - \varepsilon^2\right] < \infty, \tag{25}$$
and this implies

(a)
$$\lim_{k\to\infty}\sigma_k\left[\frac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} - \varepsilon^2\right] = 0; \tag{26}$$

(b)
$$\lim_{k\to\infty}\sup\frac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} \le \varepsilon^2. \tag{27}$$

For the detailed proof of Theorem 1, readers can refer to [14]; it can also be found in the Appendix.
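A sketch of the dead-zone update (21)–(23) follows. Since (23) defines $\sigma_k$ implicitly (it appears inside its own condition), the sketch resolves it by testing the normalized error with $\sigma_k = 1$, which is one common reading; `dead_zone_update` is an illustrative name:

```python
import numpy as np

# Dead-zone recursive least-squares step: adapt only when the normalized
# identification error exceeds the model-error bound eps.
def dead_zone_update(P, theta, Phi, e_star, eps):
    r = float(Phi @ P @ Phi.T)
    sigma = 1.0 if (e_star**2) / (1.0 + r)**2 > eps**2 else 0.0   # (23)
    if sigma:
        P = P - (P @ Phi.T @ Phi @ P) / (1.0 + r)    # (21) with sigma_k = 1
        theta = theta + P @ Phi.T * e_star           # (22), using the new P_k
    return P, theta, sigma

# Usage: on noiseless linear data the estimate converges toward theta_true.
rng = np.random.default_rng(0)
theta_true = np.array([[1.0], [-2.0], [0.5]])
P, theta = 100.0 * np.eye(3), np.zeros((3, 1))
for _ in range(200):
    Phi = rng.standard_normal((1, 3))
    e = float(Phi @ theta_true) - float(Phi @ theta)   # e*_{k+1}
    P, theta, _ = dead_zone_update(P, theta, Phi, e, eps=1e-6)
```

When the normalized error is already inside the dead zone, nothing is updated, which is what prevents the bounded disturbance $\Delta_f$ from dragging the parameter estimate around.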
3.2. Adaptive Control Using OS-ELM. In this section we discuss the adaptive control problem using OS-ELM neural networks. For the system described as
$$y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\,u_k, \tag{28}$$
with the OS-ELM neural network identification model
$$\hat{y}_{k+1} = f[\mathbf{x}_k, \mathbf{w}_k] + g[\mathbf{x}_k, \mathbf{v}_k]\,u_k, \tag{29}$$
the control $u_k$ can be given as below:
$$u_k = \frac{-f[\mathbf{x}_k, \mathbf{w}_k] + r_{k+1}}{g[\mathbf{x}_k, \mathbf{v}_k]}, \tag{30}$$
where $r_{k+1}$ is the desired output. The control $u_k$ is applied to both the plant and the neural network model.
3.2.1. OS-ELM Adaptive Control Algorithm

Step 1 (Initialization Phase). Given a random input sequence $u_k$, $k = 1, \ldots, N_0$, from the controlled plant (12) we can get a small chunk of off-line training data $S_0 = \{(\mathbf{x}_i, y_{i+1})\}_{i=1}^{N_0} \subset \mathbb{R}^n \times \mathbb{R}$. Given output function $G(\mathbf{a}_i, b_i, \mathbf{x})$ and hidden node number $2L$, which are used to initialize the learning, where $N_0 \ge 2L$:

(a) Assign random parameters of hidden nodes $(\mathbf{a}_i, b_i)$, $i = 1, \ldots, 2L$.

(b) Calculate the initial hidden-layer output matrix $\Phi_0$:
$$\Phi_0 = \begin{bmatrix}\phi_1\\ \vdots\\ \phi_{N_0}\end{bmatrix}_{N_0\times 2L}, \tag{31}$$
where $\phi_i = [G(\mathbf{a}_1, b_1, \mathbf{x}_i)\;\cdots\;G(\mathbf{a}_L, b_L, \mathbf{x}_i)\;\;G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_i)u_i\;\cdots\;G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_i)u_i]$, $i = 1, \ldots, N_0$.

(c) Calculate the initial output weight as follows:
$$\theta_0 = P_0\Phi_0^T Y_0, \tag{32}$$
where $P_0 = (\Phi_0^T\Phi_0)^{-1}$, $\theta_0 = \begin{bmatrix}\mathbf{w}_0\\ \mathbf{v}_0\end{bmatrix}$, and $Y_0 = [y_2 \cdots y_{N_0+1}]^T$.

(d) Set $k = N_0 + 1$.

Step 2 (Adaptive Control Phase). (a) Calculate the control $u_k$ and the output of the plant $y_{k+1}$:
$$u_k = \frac{-f[\mathbf{x}_k, \mathbf{w}_k] + r_{k+1}}{g[\mathbf{x}_k, \mathbf{v}_k]},\qquad y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\,u_k, \tag{33}$$
where $\theta_{N_0+1} = \begin{bmatrix}\mathbf{w}_{N_0+1}\\ \mathbf{v}_{N_0+1}\end{bmatrix} \triangleq \theta_0$ and $\mathbf{x}_k = [y_{k-n+1}, \ldots, y_k, u_{k-m}, \ldots, u_{k-1}]$.

(b) Calculate the partial hidden-layer output matrix $\Phi_k$ for the $k$th observation:
$$\Phi_k = [G(\mathbf{a}_1, b_1, \mathbf{x}_k)\;\cdots\;G(\mathbf{a}_L, b_L, \mathbf{x}_k)\;\;G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_k)u_k\;\cdots\;G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_k)u_k]. \tag{34}$$

(c) Calculate the output weight:
$$P_k = P_{k-1} - \frac{\sigma_k P_{k-1}\Phi_k^T\Phi_k P_{k-1}}{1 + \sigma_k\Phi_k P_{k-1}\Phi_k^T},\qquad \theta_{k+1} = \theta_k + \sigma_k P_k\Phi_k^T e^*_{k+1}, \tag{35}$$
where $P_{N_0} \triangleq P_0$ and
$$\sigma_k = \begin{cases}1, & \dfrac{e^{*2}_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} > \varepsilon^2,\\[2mm] 0, & \text{otherwise.}\end{cases} \tag{36}$$

(d) Set $k = k + 1$. Go to Step 2(a).
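The two phases can be sketched as a closed loop under the assumptions of the simulation in Section 4.1: plant (50) with $F = 0.1$, scalar regressor $\mathbf{x}_k = y_k$, sigmoid hidden nodes, and $N_0 = 300$ random inputs in $[0, 0.2]$. The dead zone is dropped ($\sigma_k \equiv 1$), a small ridge term stabilizes the initial inversion, and the control is clipped to guard the early transient; all three are sketch-level choices, not steps from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 15
a = rng.uniform(-1, 1, 2 * L)               # hidden weights, random and fixed
b = rng.uniform(-1, 1, 2 * L)               # hidden biases

def G(y):                                   # sigmoid hidden nodes of scalar y
    return 1.0 / (1.0 + np.exp(-(a * y + b)))

def plant(y, u, F=0.1):                     # plant (50)
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y**2) - 0.5 * (1 + y) * u

def phi(y, u):                              # regressor of (15): [G ; G*u]
    g = G(y)
    return np.concatenate([g[:L], g[L:] * u]).reshape(1, -1)

# Step 1 (Initialization Phase): N0 = 300 random inputs in [0, 0.2]
N0, rows, targets, y = 300, [], [], 0.0
for _ in range(N0):
    u = rng.uniform(0.0, 0.2)
    rows.append(phi(y, u)[0])
    y = plant(y, u)
    targets.append(y)
Phi0, Y0 = np.array(rows), np.array(targets).reshape(-1, 1)
P = np.linalg.inv(Phi0.T @ Phi0 + 1e-6 * np.eye(2 * L))   # ridge for safety
theta = P @ Phi0.T @ Y0                                   # (32)

# Step 2 (Adaptive Control Phase): track r_k = 0.2 sin(2 pi k / 100)
errs = []
for k in range(800):
    r_next = 0.2 * np.sin(2 * np.pi * (k + 1) / 100)
    g = G(y)
    f_hat = float(g[:L] @ theta[:L, 0])     # f[x_k, w_k]
    g_hat = float(g[L:] @ theta[L:, 0])     # g[x_k, v_k]
    u = float(np.clip((-f_hat + r_next) / g_hat, -1.0, 1.0))   # control (30)
    Phi = phi(y, u)
    y = plant(y, u)
    e = y - float(Phi @ theta)              # identification error e*_{k+1}
    P = P - (P @ Phi.T @ Phi @ P) / (1.0 + float(Phi @ P @ Phi.T))
    theta = theta + P @ Phi.T * e           # update (19)
    errs.append(abs(y - r_next))
```

With an exact model, (30) would give zero tracking error as in (39); in this sketch the residual identification error keeps the tracking error small but nonzero.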
3.3. Stability Analysis. Define the error between the neural network identification state and the reference model state as
$$\hat{e}_{k+1} = \hat{y}_{k+1} - r_{k+1}; \tag{37}$$
that is,
$$\hat{e}_{k+1} = f[\mathbf{x}_k, \mathbf{w}_k] + g[\mathbf{x}_k, \mathbf{v}_k]\,u_k - r_{k+1}. \tag{38}$$
By (30) we have
$$\hat{e}_{k+1} = r_{k+1} - r_{k+1} = 0. \tag{39}$$
The control error is defined as
$$e_{k+1} = y_{k+1} - r_{k+1}; \tag{40}$$
that is,
$$e_{k+1} = y_{k+1} - \hat{y}_{k+1} + \hat{y}_{k+1} - r_{k+1}, \tag{41}$$
so
$$e_{k+1} = e^*_{k+1} + \hat{e}_{k+1} = e^*_{k+1}. \tag{42}$$
Using the properties of the parameter estimation algorithm stated in Theorem 1, we immediately have
$$\lim_{k\to\infty}\sigma_k\left[\frac{e^2_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} - \varepsilon^2\right] = 0. \tag{43}$$
Because the activation function $G(\mathbf{a}_i, b_i, \mathbf{x})$ is bounded, $f$ and $g$ are bounded. If $u_k$ is chosen as (30), then $u_k$ is bounded; hence $\Phi_k$ is bounded. From (43) we have
$$\lim_{k\to\infty}\sup\frac{e^2_{k+1}}{\left(1+\sigma_k\Phi_k P_{k-1}\Phi_k^T\right)^2} \le \varepsilon^2. \tag{44}$$
Considering that $\Phi_k$ is bounded, $e^2_{k+1}$ is bounded. From (40), $y_{k+1} = e_{k+1} + r_{k+1}$; $r_k$ is bounded, so $y_{k+1}$ is bounded too.

To sum up, we find that OS-ELM neural networks have the capability of identifying and controlling nonlinear systems. However, as we know, conventional adaptive control systems are usually based on a fixed or slowly adapting model; they cannot react quickly to abrupt changes and will produce large transient errors before convergence. In this case, the MMAC algorithm is presented as a useful tool. We can construct multiple OS-ELM neural networks to cover the uncertainty of the parameters of the plant by initializing each OS-ELM neural network in a different position. Meanwhile, an effective index function can be used to select the optimal identification model and the optimal controller. MMAC based on the OS-ELM algorithm can improve the control performance of the system greatly and avoid the influence of jumping parameters on the plant.
3.4. Multiple Model Adaptive Control. Multiple model adaptive control can be regarded as an extension of conventional indirect adaptive control. The control system contains $M$ identification models, denoted by $I_l$, $l \in \{1, 2, \ldots, M\}$. According to (16) and (30), the model set can be established as follows:
$$I_l:\quad \hat{y}^l_{k+1} = f^l[\mathbf{x}_k, \mathbf{w}^l_k] + g^l[\mathbf{x}_k, \mathbf{v}^l_k]\,u^l_k, \tag{45}$$
where
$$f^l[\mathbf{x}_k, \mathbf{w}^l_k] = \sum_{i=1}^{L}w^l_i G(\mathbf{a}_i, b_i, \mathbf{x}_k),\qquad g^l[\mathbf{x}_k, \mathbf{v}^l_k] = \sum_{i=L+1}^{2L}v^l_i G(\mathbf{a}_i, b_i, \mathbf{x}_k). \tag{46}$$
The control $u^l_k$ can be written as
$$u^l_k = \frac{-f^l[\mathbf{x}_k, \mathbf{w}^l_k] + r_{k+1}}{g^l[\mathbf{x}_k, \mathbf{v}^l_k]}. \tag{47}$$
Define $e^{*l}_{k+1} \triangleq y_{k+1} - \hat{y}^l_{k+1}$. At any instant one of the models $I_l$ is selected by a switching rule, and the corresponding control input is used to control the plant. Referring to Theorem 1, the index function has the form
$$J^l_k = \sum_{\tau=1}^{k}\sigma_\tau\left[\frac{e^{*l2}_{\tau+1}}{\left(1+\sigma_\tau\Phi_\tau P_{\tau-1}\Phi_\tau^T\right)^2} - \varepsilon^2\right]. \tag{48}$$
At every sample time, the model that corresponds to the minimum $J^l_k$ is chosen; that is, $I_i$ is chosen if
$$J^i_k = \min_l J^l_k, \tag{49}$$
and $u^i_k$ is used as the control input at that instant. The structure of multiple model adaptive control is shown in Figure 2.

Referring to [15], Narendra and Xiang established four kinds of MMAC structure: (1) all models are fixed; (2) all models are adaptive; (3) $(M-1)$ fixed models and one adaptive model; (4) $(M-2)$ fixed models, one free-running adaptive model, and one reinitialized adaptive model. We can get a similar stability result. For the detailed proof of stability of MMAC, the reader can refer to [15–17].
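The switching rule (48)–(49) can be sketched as a running cost per model; `SwitchedModelBank` and its arguments are illustrative names, not from the paper:

```python
import numpy as np

# Each model l accumulates a cost J_l from its normalized identification
# errors; at every sample the model with minimal cost supplies the control.
class SwitchedModelBank:
    def __init__(self, n_models, eps):
        self.J = np.zeros(n_models)
        self.eps2 = eps**2

    def step(self, e_star, denom, sigma):
        # e_star[l]: identification error of model l at this sample;
        # denom[l] = 1 + sigma_l * Phi P Phi^T; sigma[l]: dead-zone flag.
        self.J += sigma * (e_star**2 / denom**2 - self.eps2)   # running sum (48)
        return int(np.argmin(self.J))                          # chosen model (49)

bank = SwitchedModelBank(n_models=4, eps=0.01)
# One sample time: model 2 (zero-based index) has the smallest error.
chosen = bank.step(e_star=np.array([0.3, 0.2, 0.01, 0.4]),
                   denom=np.ones(4), sigma=np.ones(4))
print(chosen)  # 2
```

Because the cost is cumulative, a model that has matched the plant well over its whole history is preferred to one that happens to fit only at the current sample, which is what keeps the switching from chattering.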
4 Simulation Results
4.1. Adaptive Control. In this section we present results of simulations of adaptive control of nonlinear discrete-time systems by using OS-ELM neural networks. The nonlinear system considered is
$$y_{k+1} = \frac{y_k(1 - 0.5F) + 0.5F}{1 + y_k^2} - 0.5(1 + y_k)\,u_k, \tag{50}$$
where we define $f[y_k] = (y_k(1-0.5F)+0.5F)/(1+y_k^2)$ and $g[y_k] = -0.5(1+y_k)$; $F = 0.1$ is a scalar. The control goal is to force the system states to track a reference model trajectory. We select the reference model as
$$r_k = 0.2\sin\left(\frac{2\pi k}{100}\right). \tag{51}$$
The single-input/single-output nonlinear discrete-time system can be modeled by
$$\hat{y}_{k+1} = f[y_k, \mathbf{w}_k] + g[y_k, \mathbf{v}_k]\,u_k, \tag{52}$$
where $f$ and $g$ are OS-ELM neural networks. The neural networks $f$ and $g$ have the same structure $\aleph_{1,50,1}$. Based on the OS-ELM adaptive control algorithm, the parameters $\mathbf{w}_k$ and $\mathbf{v}_k$ of the neural networks are updated to $\mathbf{w}_{k+1}$ and $\mathbf{v}_{k+1}$ using the OS-ELM algorithm.

For the OS-ELM control algorithm, in Step 1 we choose $N_0 = 300$; the control $u_k$ is selected randomly from $[0, 0.2]$. A small chunk of off-line training data is used to calculate the initial output weights $\mathbf{w}_0$ and $\mathbf{v}_0$. Since the ELM algorithm adjusts only the output weights, it shows fast and accurate identification results.

Following this, in Step 2, both identification and control are implemented. The response of the controlled system with the reference input $r_k = 0.2\sin(2\pi k/100)$ by using the OS-ELM algorithm is shown in Figure 3. We can see that the controlled system can track the reference model rapidly. The control error is shown in Figure 4.
42 Choice of the Data Set In simulation process we findthat OS-ELM is sensitive to the initial training data Initialtraining data determine the initial value of adaptive controldirectly When we select different initial training samplesthe performance of the control results varies greatly Figure5 shows the control results using OS-ELM neural networkswith different initial training samples
In Figure 6 we select 6 different ranges of random input-output pair We can find that different ranges of input valuesand the corresponding output values concentrate in a singlearea According to the output ranges of the controlled plantwe can choose a specific data area to train the neural networkIn Figure 5 the output range of the controlled system is inthe [minus02 02] If we adopt [0 02] random input (red area inFigure 6(b) initializing OS-ELM by using this set data) thecontrol result is shown in Figure 5(b) On the contrary if therandom input in the other area (Green Blue yellow area andso on in Figure 6(b)) is used to initialize OS-ELM the controlresult is poor (Figure 5(a))
On one hand, in the actual production process we have a large amount of field data distributed over a wide range. On the other hand, we generally know the objective output range of the controlled plant. Then, according to the output range of the controlled plant, we can choose a specific data area to train the neural network. In this way the accuracy of control can be improved and computation time can be saved.
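The range-based selection described above can be sketched as a simple filter over field data; the stand-in plant, noise level, and ranges below are illustrative assumptions only, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical field data: input-output pairs gathered over a wide operating
# range (the tanh "plant" below is a made-up stand-in).
u_field = rng.uniform(-0.2, 2.0, size=5000)
y_field = np.tanh(1.5 * u_field) + rng.normal(0.0, 0.01, size=5000)

# Keep only the pairs whose outputs fall in the plant's objective output
# range, and use just this slice to initialize OS-ELM.
lo, hi = -0.2, 0.2
mask = (y_field >= lo) & (y_field <= hi)
u_init, y_init = u_field[mask], y_field[mask]

print(mask.sum(), len(u_field))  # a small, relevant subset of the field data
```

Initializing on this subset concentrates the initial output weights on the operating region the controller will actually visit, which is the mechanism behind the improvement seen in Figure 5(b).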
Abstract and Applied Analysis 7
[Figure 2 shows the control structure: M identification models I_1, ..., I_M with corresponding controllers C_1, ..., C_M, a switching block that selects the control input u_k applied to the plant, a reference model producing r_k, and the identification errors e*_{1,k}, ..., e*_{M,k} together with the control error driving the switching.]
Figure 2: Structure of multiple models adaptive control.
Figure 3: Nonlinear system adaptive control by using OS-ELM (output y_k tracking the reference r_k).
4.3. Multiple Models Adaptive Control. From the above simulation results we find that OS-ELM neural networks show excellent identification capability. But from the OS-ELM adaptive control algorithm we can see that it contains two steps (an initialization phase and an adaptive control phase). This algorithm performs well only for systems with fixed or slowly time-varying parameters. For a system with jumping parameters, once a parameter change happens, the OS-ELM adaptive control algorithm needs to implement Step 1 again, or the control performance will be very poor. To solve this kind of problem, MMAC (multiple models adaptive control) is presented as a very useful tool.
Figure 4: Control error.
Consider the controlled plant as follows:

y_{k+1} = (y_k (1 − 0.5F) + 0.5F) / (1 + y_k²) − 0.5 (1 + y_k) u_k,  (53)

where

F = 0.1 for 1 ≤ k < 500, F = 0.15 for 500 ≤ k < 1000, F = 0.355 for 1000 ≤ k ≤ 1500.  (54)
We choose the index function as (48) and construct 4 OS-ELM neural network models. Among them, model 1 is the reinitialized adaptive model; models 2 and 3 are fixed models, formed according to the environments of the changing parameters; model 4 is a free-running adaptive model.
Figure 5: OS-ELM control results with different initial training samples: (a) (−0.2, 0) random input; (b) (0, 0.2) random input.
Figure 6: Data set. (a) Output results y_k with input data drawn from [−0.2, 0], [0, 0.2], [0.2, 0.5], [0.5, 1], [1, 1.5], and [1.5, 2]. (b) Different ranges of input values u_k and the corresponding output values y_k concentrate in a single area (data sets 1-6).
At any sample time, model 1 chooses the optimal model from the model set formed by models 2, 3, and 4. Figure 7 shows the responses of the multiple models and the single model.
In Figure 7(a) we can see that single-model adaptive control shows poor control quality for the system with jumping parameters: once the parameters change abruptly, the neural network model must adjust weights that are far from the true values. Figure 7(c) shows the resulting poor control input. In MMAC, the reinitialized adaptive model can initialize its weights by choosing the best weights of a set of models based on the past performance of the plant. At every sample time, based on the index function, the best model in the model set is selected, and the parameters of the reinitialized model can be updated from the parameters of that model. From Figure 7(b) we can see that MMAC improves the control quality dramatically compared with Figure 7(a). Figure 7(d) shows the control sequence, and Figure 7(e) shows the switching procedure of the MMAC.
5. Conclusion
The application of OS-ELM to the identification and control of nonlinear dynamic systems has been studied carefully. An OS-ELM neural network can improve the accuracy of identification and the control quality dramatically. Since the training process of an OS-ELM neural network is sensitive to the initial training data, different scopes of training data can be used to initialize the OS-ELM neural network based on the output range of the system. The topological structure of the OS-ELM neural network can be adjusted dynamically by using a multiple model switching strategy when it is used to control systems with jumping parameters. Simulation results show that MMAC
Figure 7: Multiple OS-ELM neural network models adaptive control: (a) single-model response (r_k, y_k); (b) multiple-models response; (c) control input, single model; (d) control input, multiple models; (e) switching process.
based on OS-ELM, which is presented in this paper, can improve the control performance dramatically.
Appendix
Lemma A.1 (matrix inversion lemma, see [18]). If

P_k^{−1} = P_{k−1}^{−1} + Φ_k^T Φ_k σ_k,  (A.1)

where the scalar σ_k > 0, then P_k is related to P_{k−1} via

P_k = P_{k−1} − σ_k P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T).  (A.2)

Also,

P_k Φ_k^T = P_{k−1} Φ_k^T / (1 + σ_k Φ_k P_{k−1} Φ_k^T).  (A.3)
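The lemma is easy to verify numerically. The sketch below (the random P_{k−1}, row regressor Φ_k, and σ_k are arbitrary stand-ins, not values from the paper) checks (A.2) against a direct inverse of (A.1) and then checks (A.3):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric positive-definite P_{k-1}, row vector Phi_k, sigma_k > 0.
n = 4
M = rng.normal(size=(n, n))
P_prev = M @ M.T + n * np.eye(n)
Phi = rng.normal(size=(1, n))
sigma = 0.7

# (A.1): P_k^{-1} = P_{k-1}^{-1} + sigma Phi^T Phi, inverted directly...
P_direct = np.linalg.inv(np.linalg.inv(P_prev) + sigma * Phi.T @ Phi)

# ...and via the rank-one update (A.2).
denom = 1.0 + sigma * float(Phi @ P_prev @ Phi.T)
P_lemma = P_prev - sigma * (P_prev @ Phi.T @ Phi @ P_prev) / denom

print(np.allclose(P_direct, P_lemma))                         # True
# (A.3): P_k Phi^T = P_{k-1} Phi^T / (1 + sigma Phi P_{k-1} Phi^T).
print(np.allclose(P_lemma @ Phi.T, P_prev @ Phi.T / denom))   # True
```

This rank-one form is what lets the updates (19), (21), and (9) avoid any explicit matrix inversion at run time.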
Proof of Theorem 1. Define β_k ≜ 1/(1 + σ_k Φ_k P_{k−1} Φ_k^T); then from (20), (22), and (A.3) we have

θ̃_{k+1} = θ̃_k − σ_k β_k P_{k−1} Φ_k^T e*_{k+1},  (A.4)

where θ̃_{k+1} ≜ θ*_0 − θ_{k+1}. Then from (A.4) we can obtain

Φ_k θ̃_{k+1} + Δ_f = β_k e*_{k+1}.  (A.5)

Introduce the Lyapunov function V_{k+1} = θ̃_{k+1}^T P_k^{−1} θ̃_{k+1}; substituting (A.1) into it, we obtain

V_{k+1} = θ̃_{k+1}^T P_{k−1}^{−1} θ̃_{k+1} + θ̃_{k+1}^T Φ_k^T Φ_k σ_k θ̃_{k+1}
= [θ̃_k − σ_k β_k P_{k−1} Φ_k^T e*_{k+1}]^T P_{k−1}^{−1} [θ̃_k − σ_k β_k P_{k−1} Φ_k^T e*_{k+1}] + σ_k (Φ_k θ̃_{k+1})²
= θ̃_k^T P_{k−1}^{−1} θ̃_k − 2 σ_k β_k Φ_k θ̃_k e*_{k+1} + σ_k² β_k² e*²_{k+1} Φ_k P_{k−1} Φ_k^T + σ_k (Φ_k θ̃_{k+1})².  (A.6)
Substituting (A.5) into the above equation,

V_{k+1} = V_k − 2 σ_k Φ_k θ̃_k (Φ_k θ̃_{k+1} + Δ_f) + σ_k² Φ_k P_{k−1} Φ_k^T (Φ_k θ̃_{k+1} + Δ_f)² + σ_k (Φ_k θ̃_{k+1})².  (A.7)

Referring to [14], we can obtain

V_{k+1} = V_k − σ_k (Φ_k θ̃_{k+1})² − 2 σ_k Φ_k θ̃_{k+1} Δ_f − σ_k² Φ_k P_{k−1} Φ_k^T (Φ_k θ̃_{k+1} + Δ_f)²
≤ V_k − σ_k (Φ_k θ̃_{k+1})² − 2 σ_k Φ_k θ̃_{k+1} Δ_f,  (A.8)
that is,

V_{k+1} ≤ V_k − σ_k (Φ_k θ̃_{k+1})² − 2 σ_k Φ_k θ̃_{k+1} Δ_f
= V_k + σ_k Δ_f² − σ_k (Φ_k θ̃_{k+1} + Δ_f)²
= V_k + σ_k Δ_f² − σ_k β_k² e*²_{k+1}
= V_k − σ_k (β_k² e*²_{k+1} − Δ_f²).  (A.9)

Finally, if we choose σ_k as in (36), we have

V_{k+1} ≤ V_k − σ_k (β_k² e*²_{k+1} − ε²),  (A.10)

that is,

V_{k+1} − V_k ≤ −σ_k (β_k² e*²_{k+1} − ε²) ≤ 0.  (A.11)

Since V_k is nonnegative,

θ̃_{k+1}^T P_k^{−1} θ̃_{k+1} ≤ θ̃_k^T P_{k−1}^{−1} θ̃_k ≤ ⋯ ≤ θ̃_1^T P_0^{−1} θ̃_1.  (A.12)
Now from the matrix inversion lemma it follows that

λ_min(P_k^{−1}) ≥ λ_min(P_{k−1}^{−1}) ≥ λ_min(P_0^{−1});  (A.13)

this implies

λ_min(P_0^{−1}) ‖θ̃_{k+1}‖² ≤ λ_min(P_k^{−1}) ‖θ̃_{k+1}‖² ≤ θ̃_{k+1}^T P_k^{−1} θ̃_{k+1} ≤ θ̃_1^T P_0^{−1} θ̃_1 ≤ λ_max(P_0^{−1}) ‖θ̃_1‖².  (A.14)
This establishes part (i).
Summing both sides of (A.11) from 1 to N, with V_k nonnegative, we have

0 ≤ V_{N+1} ≤ V_1 − Σ_{k=1}^{N} σ_k (β_k² e*²_{k+1} − ε²);  (A.15)

letting N → ∞, we immediately get

lim_{N→∞} Σ_{k=1}^{N} σ_k [ e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε² ] < ∞,  (A.16)

so (ii) holds. Then

lim_{k→∞} σ_k [ e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε² ] = 0,  (A.17)

so (a) and (b) hold.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the Fund of the National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References
[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270-276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407-413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452-465, 2011.
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985-990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411-1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541-552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588-2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111-124, 2013.
[13] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553-559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669-1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190-200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.
Figure 1: Single-hidden-layer feedforward network: input x, L hidden nodes with outputs G(a_1, b_1, x), ..., G(a_i, b_i, x), ..., G(a_L, b_L, x), output weights β_1, ..., β_i, ..., β_L, and network output f(x).
its output weight β adaptively when new observations are received. The algorithm can be summarized as follows.

OS-ELM Algorithm. One is given a training set S = {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, 2, ...}, an output function G(a_i, b_i, x), and a hidden node number L.

Step 1 (Initialization Phase). From the given training set S, a small chunk of training data S_0 = {(x_i, t_i)}_{i=1}^{N_0} is used to initialize the learning, where N_0 ≥ L.

(a) Assign random parameters of hidden nodes (a_i, b_i), where i = 1, ..., L.
(b) Calculate the initial hidden-layer output matrix H_0.
(c) Calculate the initial output weight β_0 = P_0 H_0^T T_0, where P_0 = (H_0^T H_0)^{−1} and T_0 = [t_1 ⋯ t_{N_0}]^T.
(d) Set k = 0, where k is the number of chunks trained so far.

Step 2 (Sequential Learning Phase). (a) Present the (k+1)th chunk of new observations S_{k+1} = {(x_i, t_i)}, i = (Σ_{j=0}^{k} N_j) + 1, ..., Σ_{j=0}^{k+1} N_j, where N_{k+1} denotes the number of observations in the (k+1)th chunk.

(b) Calculate the partial hidden-layer output matrix H_{k+1} for the (k+1)th chunk of data S_{k+1}:

H_{k+1} = [ G(a_1, b_1, x_{(Σ_{j=0}^{k} N_j)+1}) ⋯ G(a_L, b_L, x_{(Σ_{j=0}^{k} N_j)+1}) ; ⋮ ; G(a_1, b_1, x_{Σ_{j=0}^{k+1} N_j}) ⋯ G(a_L, b_L, x_{Σ_{j=0}^{k+1} N_j}) ].  (8)

Set T_{k+1} = [t_{(Σ_{j=0}^{k} N_j)+1} ⋯ t_{Σ_{j=0}^{k+1} N_j}]^T.

(c) Calculate the output weight:

P_{k+1} = P_k − P_k H_{k+1}^T (I + H_{k+1} P_k H_{k+1}^T)^{−1} H_{k+1} P_k,
β_{k+1} = β_k + P_{k+1} H_{k+1}^T (T_{k+1} − H_{k+1} β_k).  (9)

(d) Set k = k + 1. Go to Step 2(a).
It can be seen from the above OS-ELM algorithm that OS-ELM becomes the batch ELM when N_0 = N. For a detailed description, readers can refer to Liang et al. [9]. From (9) it can be seen that the OS-ELM algorithm is similar to recursive least squares (RLS) in [13]; hence all the convergence results of RLS can be applied here.
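As a rough illustration (not the authors' code), the batch initialization and the sequential update (9) can be sketched in Python with sigmoid hidden nodes; the target function, data ranges, and sizes here are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, A, b):
    # Sigmoid hidden-layer outputs G(a_i, b_i, x) for each row of X.
    return 1.0 / (1.0 + np.exp(-(X @ A + b)))

def target(X):
    # Made-up smooth function to learn online.
    return np.sin(X[:, :1]) + 0.5 * X[:, 1:2]

n, L = 2, 20                  # input dimension, hidden node number
A = rng.normal(size=(n, L))   # random hidden weights a_i (fixed afterwards)
b = rng.normal(size=L)        # random hidden biases b_i

# Step 1: initialization on a small chunk S_0 with N_0 >= L.
X0 = rng.uniform(-1, 1, size=(40, n))
H0 = hidden(X0, A, b)
P = np.linalg.inv(H0.T @ H0)            # P_0 = (H_0^T H_0)^{-1}
beta = P @ H0.T @ target(X0)            # beta_0 = P_0 H_0^T T_0

# Step 2: sequential phase with one observation per chunk (N_j = 1), eq. (9).
for _ in range(500):
    x = rng.uniform(-1, 1, size=(1, n))
    h = hidden(x, A, b)                 # partial hidden-layer output H_{k+1}
    P = P - P @ h.T @ np.linalg.inv(np.eye(1) + h @ P @ h.T) @ h @ P
    beta = beta + P @ h.T @ (target(x) - h @ beta)

Xt = rng.uniform(-1, 1, size=(200, n))
err = np.abs(hidden(Xt, A, b) @ beta - target(Xt)).max()
print(err)   # small test error: only the output weights were trained
```

Note that the hidden parameters (A, b) are never touched after Step 1, which is exactly why each sequential step is a cheap rank-limited update.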
3. Adaptive Control by Using OS-ELM Neural Networks
3.1. Identification. Consider the following SISO nonlinear discrete-time system:

y_{k+1} = f_0(·) + g_0(·) u_{k−d+1},  (10)

where f_0 and g_0 are infinitely differentiable functions defined on compact sets F ⊂ R^{n+m}, G ⊂ R^{n+m}, with arguments

y_{k−n+1}, ..., y_k, u_{k−d−m+1}, ..., u_{k−d};  (11)

y is the output, u is the input, m ≤ n, d is the relative degree of the system, and g_0 is bounded away from zero. The arguments of f_0 and g_0 are real variables. In this paper we consider a simplified form with d = 1. Then the SISO nonlinear discrete-time system can be described as

y_{k+1} = f_0[x_k] + g_0[x_k] u_k,  (12)

where x_k = [y_{k−n+1}, ..., y_k, u_{k−m}, ..., u_{k−1}]. Let us first assume that there exist exact weights w* and v* such that the functions f[x_k, w*] and g[x_k, v*] can approximate the functions f_0[x_k] and g_0[x_k] without unmodelled dynamics. The nonlinear system (12) can then be described by the neural networks as follows:

y_{k+1} = f[x_k, w*] + g[x_k, v*] u_k,  (13)
where the functions f[·, ·] and g[·, ·] can be selected as OS-ELM neural networks; that is,

f[x_k, w*] = Σ_{i=1}^{L} w*_i G(a_i, b_i, x_k),
g[x_k, v*] = Σ_{i=L+1}^{2L} v*_i G(a_i, b_i, x_k),  (14)

where a_i, b_i can be generated randomly and kept constant. Then (13) can be rewritten as

y_{k+1} = Φ_k θ*_0,  (15)

where Φ_k = [G(a_1, b_1, x_k) ⋯ G(a_L, b_L, x_k), G(a_{L+1}, b_{L+1}, x_k) u_k ⋯ G(a_{2L}, b_{2L}, x_k) u_k] and θ*_0 = [w*, v*]^T.
Let w_k and v_k denote the estimates of w* and v* at time k. The neural network identification model can be written as follows:

ŷ_{k+1} = f[x_k, w_k] + g[x_k, v_k] u_k,  (16)

that is,

ŷ_{k+1} = Φ_k θ_k,  (17)

where θ_k = [w_k; v_k]. Define e*_{k+1} as

e*_{k+1} = y_{k+1} − ŷ_{k+1} = y_{k+1} − Φ_k θ_k.  (18)

Referring to the OS-ELM algorithm and taking into account the time-series nature of the control system, the on-line observation data come one by one; if we choose N_j ≡ 1, the updating rule can be rewritten as

P_k = P_{k−1} − P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + Φ_k P_{k−1} Φ_k^T),
θ_{k+1} = θ_k + P_k Φ_k^T e*_{k+1},  (19)
where the initial θ_0 and P_0 = (Φ_0^T Φ_0)^{−1} can be obtained in OS-ELM Step 1 (Initialization Phase).
Due to the existence of unmodelled dynamics, the nonlinear system (15) can be represented as

y_{k+1} = Φ_k θ*_0 + Δ_f,  (20)

where Δ_f is the model error and satisfies sup |Δ_f| ≤ ε. Then a dead-zone algorithm will be used:

P_k = P_{k−1} − σ_k P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T),  (21)

θ_{k+1} = θ_k + σ_k P_k Φ_k^T e*_{k+1},  (22)

where

σ_k = 1 if e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² > ε², and σ_k = 0 otherwise.  (23)
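A minimal sketch of one dead-zone update step follows. Since σ_k is binary, we evaluate the condition in (23) with σ_k = 1 (our reading of the implicit definition); the linear-in-parameters test problem is hypothetical:

```python
import numpy as np

def dead_zone_update(theta, P, phi, e_star, eps):
    """One dead-zone step, eqs. (21)-(22); phi is the row regressor Phi_k."""
    s = float(phi @ P @ phi.T)
    if (e_star / (1.0 + s)) ** 2 <= eps ** 2:
        return theta, P                              # inside the dead zone: freeze
    P_new = P - (P @ phi.T @ phi @ P) / (1.0 + s)    # eq. (21) with sigma_k = 1
    theta_new = theta + P_new @ phi.T * e_star       # eq. (22)
    return theta_new, P_new

# Hypothetical linear-in-parameters plant with bounded model error.
rng = np.random.default_rng(2)
theta_true = np.array([[1.5], [-0.5]])
theta, P = np.zeros((2, 1)), 100.0 * np.eye(2)
for _ in range(300):
    phi = rng.normal(size=(1, 2))
    y = float(phi @ theta_true) + rng.uniform(-0.05, 0.05)  # |Delta_f| <= eps
    e_star = y - float(phi @ theta)
    theta, P = dead_zone_update(theta, P, phi, e_star, eps=0.05)

print(np.abs(theta - theta_true).max())  # error driven down toward the eps level
```

The freeze branch is the point of the dead zone: once the normalized error is within ε, updating stops, so bounded unmodelled dynamics cannot drag the estimate away.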
Theorem 1. Given the system described by (20), with the updating rule described by (21)-(23), we have the following results:

(i) ‖θ*_0 − θ_k‖² ≤ ‖θ*_0 − θ_{k−1}‖² ≤ ⋯ ≤ ‖θ*_0 − θ_1‖²;  (24)

(ii) lim_{N→∞} Σ_{k=1}^{N} σ_k [ e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε² ] < ∞,  (25)

and this implies

(a) lim_{k→∞} σ_k [ e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε² ] = 0;  (26)

(b) lim_{k→∞} sup e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² ≤ ε².  (27)
For the detailed proof of Theorem 1, readers can refer to [14]; it can also be found in the appendix.
3.2. Adaptive Control Using OS-ELM. In this section we discuss the adaptive control problem using OS-ELM neural networks. For the system described as

y_{k+1} = f_0[x_k] + g_0[x_k] u_k,  (28)

with the OS-ELM neural network identification model

ŷ_{k+1} = f[x_k, w_k] + g[x_k, v_k] u_k,  (29)

the control u_k can be given as below:

u_k = (−f[x_k, w_k] + r_{k+1}) / g[x_k, v_k],  (30)

where r_{k+1} is the desired output. The control u_k is applied to both the plant and the neural network model.
3.2.1. OS-ELM Adaptive Control Algorithm

Step 1 (Initialization Phase). Given a random input sequence u_k, k = 1, ..., N_0, from the controlled plant (12) we can get a small chunk of off-line training data S_0 = {(x_i, y_{i+1})}_{i=1}^{N_0} ⊂ R^n × R^1. Given are an output function G(a_i, b_i, x) and hidden node number 2L, which are used to initialize the learning, where N_0 ≥ 2L.

(a) Assign random parameters of hidden nodes (a_i, b_i), where i = 1, ..., 2L.

(b) Calculate the initial hidden-layer output matrix Φ_0:

Φ_0 = [φ_1; ...; φ_{N_0}]  (an N_0 × 2L matrix),  (31)

where φ_i = [G(a_1, b_1, x_i) ⋯ G(a_L, b_L, x_i), G(a_{L+1}, b_{L+1}, x_i) u_i ⋯ G(a_{2L}, b_{2L}, x_i) u_i], i = 1, ..., N_0.

(c) Calculate the initial output weight as follows:

θ_0 = P_0 Φ_0^T Y_0,  (32)

where P_0 = (Φ_0^T Φ_0)^{−1}, θ_0 = [w_0; v_0], and Y_0 = [y_2 ⋯ y_{N_0+1}]^T.

(d) Set k = N_0 + 1.

Step 2 (Adaptive Control Phase). (a) Calculate the control u_k and the output of the plant y_{k+1}:

u_k = (−f[x_k, w_k] + r_{k+1}) / g[x_k, v_k],
y_{k+1} = f_0[x_k] + g_0[x_k] u_k,  (33)

where θ_{N_0+1} = [w_{N_0+1}; v_{N_0+1}] ≜ θ_0 and x_k = [y_{k−n+1}, ..., y_k, u_{k−m}, ..., u_{k−1}].

(b) Calculate the partial hidden-layer output matrix Φ_k for the kth observation data:

Φ_k = [G(a_1, b_1, x_k) ⋯ G(a_L, b_L, x_k), G(a_{L+1}, b_{L+1}, x_k) u_k ⋯ G(a_{2L}, b_{2L}, x_k) u_k].  (34)

(c) Calculate the output weight:

P_k = P_{k−1} − σ_k P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T),
θ_{k+1} = θ_k + σ_k P_k Φ_k^T e*_{k+1},  (35)

where P_{N_0} ≜ P_0 and

σ_k = 1 if e*²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² > ε², and σ_k = 0 otherwise.  (36)

(d) Set k = k + 1. Go to Step 2(a).
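The two-phase algorithm above can be sketched end to end on a toy plant of the form (50). This is an illustrative reconstruction, not the authors' code: the dead zone is disabled (σ_k ≡ 1), a small ridge term stabilizes the initial inverse, and the control is clipped as a safety guard; none of these appear in the original algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy SISO plant in the spirit of (50), with constant F = 0.1 and n = 1.
F = 0.1
def plant(y, u):
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y * y) - 0.5 * (1 + y) * u

L = 10                                  # L nodes each for f and g (2L total)
a = rng.normal(size=2 * L)
b = rng.normal(size=2 * L)
G = lambda y: 1.0 / (1.0 + np.exp(-(a * y + b)))   # sigmoid G(a_i, b_i, x_k)

def regressor(y, u):
    h = G(y)
    return np.concatenate([h[:L], h[L:] * u]).reshape(1, -1)   # Phi_k, eq. (34)

# Step 1: initialization phase with random controls u_k in [0, 0.2].
N0, y = 300, 0.0
Phi0, Y0 = [], []
for _ in range(N0):
    u = rng.uniform(0.0, 0.2)
    Phi0.append(regressor(y, u)[0])
    y = plant(y, u)
    Y0.append(y)
Phi0, Y0 = np.array(Phi0), np.array(Y0).reshape(-1, 1)
P = np.linalg.inv(Phi0.T @ Phi0 + 1e-6 * np.eye(2 * L))  # ridge: our addition
theta = P @ Phi0.T @ Y0                                  # eq. (32)

# Step 2: adaptive control phase tracking r_k = 0.2 sin(2 pi k / 100).
errs = []
for k in range(800):
    r_next = 0.2 * np.sin(2 * np.pi * (k + 1) / 100)
    h = G(y)
    f_hat = float(theta[:L, 0] @ h[:L])
    g_hat = float(theta[L:, 0] @ h[L:])
    u = float(np.clip((r_next - f_hat) / g_hat, -1.0, 1.0))  # eq. (30) + guard
    phi = regressor(y, u)
    y = plant(y, u)
    e_star = y - float(phi @ theta)          # a-priori error e*_{k+1}
    P = P - (P @ phi.T @ phi @ P) / (1.0 + float(phi @ P @ phi.T))
    theta = theta + P @ phi.T * e_star       # eq. (35) with sigma_k = 1
    errs.append(abs(y - r_next))

print(np.mean(errs[-200:]))  # steady-state tracking error
```

The initialization range [0, 0.2] matches the data-selection discussion in Section 4.2: the random inputs keep the plant inside the output region the reference trajectory will later visit.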
3.3. Stability Analysis. Define the error between the neural network identification state and the reference model state as

ê_{k+1} = ŷ_{k+1} − r_{k+1},  (37)

that is,

ê_{k+1} = f[x_k, w_k] + g[x_k, v_k] u_k − r_{k+1}.  (38)

By (30) we have

ê_{k+1} = r_{k+1} − r_{k+1} = 0.  (39)

The control error is defined as

e_{k+1} = y_{k+1} − r_{k+1},  (40)

that is,

e_{k+1} = y_{k+1} − ŷ_{k+1} + ŷ_{k+1} − r_{k+1},  (41)

so

e_{k+1} = e*_{k+1} + ê_{k+1} = e*_{k+1}.  (42)

Using the properties of the parameter estimation algorithm stated in Theorem 1, we immediately have

lim_{k→∞} σ_k [ e²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε² ] = 0.  (43)

Because the activation function G(a_i, b_i, x) is bounded, f and g are bounded. If u_k is chosen as (30), then u_k is bounded; hence Φ_k is bounded. From (43) we have

lim_{k→∞} sup e²_{k+1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T)² ≤ ε².  (44)

Considering that Φ_k is bounded, e²_{k+1} is bounded. From (40), y_{k+1} = e_{k+1} + r_{k+1}; r_k is bounded, so y_{k+1} is bounded too.
To sum up, we find that OS-ELM neural networks have the capability of identifying and controlling nonlinear systems. However, as we know, conventional adaptive control systems are usually based on a fixed or slowly adapting model; they cannot react quickly to abrupt changes, which results in large transient errors before convergence. In this case, the MMAC algorithm is presented as a useful tool. We can construct multiple OS-ELM neural networks to cover the uncertainty of the parameters of the plant by initializing each OS-ELM neural network in a different position. Meanwhile, an effective index function can be used to select the optimal identification model and the optimal controller. MMAC based on the OS-ELM algorithm can improve the control property of the system greatly and avoid the influence of jumping parameters on the plant.
3.4. Multiple Model Adaptive Control. Multiple model adaptive control can be regarded as an extension of conventional indirect adaptive control. The control system contains M identification models, denoted by I^l, l ∈ {1, 2, ..., M}. According to (16) and (30), the multiple model set can be established as follows:

I^l: ŷ^l_{k+1} = f^l[x_k, w^l_k] + g^l[x_k, v^l_k] u^l_k,  (45)

where

f^l[x_k, w^l_k] = Σ_{i=1}^{L} w^l_i G(a_i, b_i, x_k),
g^l[x_k, v^l_k] = Σ_{i=L+1}^{2L} v^l_i G(a_i, b_i, x_k).  (46)

The control u^l_k can be written as

u^l_k = (−f^l[x_k, w^l_k] + r_{k+1}) / g^l[x_k, v^l_k].  (47)

Define e*^l_{k+1} ≜ y_{k+1} − ŷ^l_{k+1}. At any instant, one of the models I^l is selected by a switching rule and the corresponding control input is used to control the plant. Referring to Theorem 1, the index function has the form

J^l_k = Σ_{τ=1}^{k} σ_τ [ (e*^l_{τ+1})² / (1 + σ_τ Φ_τ P_{τ−1} Φ_τ^T)² − ε² ].  (48)

At every sample time, the model that corresponds to the minimum J^l_k is chosen; that is, I^i is chosen if

J^i_k = min_l J^l_k,  (49)

and u^i_k is used as the control input at that instant. The structure of multiple models adaptive control is shown in Figure 2.
Referring to [15], Narendra and Xiang established 4 kinds of MMAC structures: (1) all models fixed; (2) all models adaptive; (3) (M − 1) fixed models and one adaptive model; (4) (M − 2) fixed models, one free-running adaptive model, and one reinitialized adaptive model. A similar stability result can be obtained here. For the detailed proof of stability of MMAC, the reader can refer to [15-17].
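The switching rule (48)-(49) amounts to accumulating a per-model cost and taking the arg-min. A hypothetical sketch (the model errors and denominator terms below are made-up inputs, not simulation data from the paper):

```python
import numpy as np

def switch(J, errors, denom, eps):
    """Accumulate the per-model cost of (48) and pick the arg-min model.

    J      : current index values J_k^l, shape (M,)
    errors : latest identification errors e*_{k+1} per model, shape (M,)
    denom  : 1 + sigma Phi P Phi^T per model (taken as 1 here), shape (M,)
    """
    term = errors**2 / denom**2 - eps**2
    sigma = (term > 0).astype(float)       # dead-zone switch, cf. (23)/(36)
    J = J + sigma * term
    return J, int(np.argmin(J))

# Made-up errors: model 0 mispredicts persistently; models 1 and 2 are accurate.
J = np.zeros(3)
for e in ([0.4, 0.02, 0.05], [0.5, 0.01, 0.04], [0.3, 0.02, 0.03]):
    J, best = switch(J, np.array(e), np.ones(3), eps=0.05)

print(best)  # prints 1: the first model whose accumulated cost stays at zero
```

Because the dead zone zeroes out terms already within ε, accurate models accumulate no cost at all, and the arg-min reliably steers the controller away from a mispredicting model after a parameter jump.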
4. Simulation Results

4.1. Adaptive Control. In this section we present simulation results of adaptive control of nonlinear discrete-time systems by using OS-ELM neural networks. The nonlinear system considered is

y_{k+1} = (y_k (1 − 0.5F) + 0.5F) / (1 + y_k²) − 0.5 (1 + y_k) u_k,  (50)

where we define f[y_k] = (y_k (1 − 0.5F) + 0.5F) / (1 + y_k²) and g[y_k] = −0.5 (1 + y_k); F = 0.1 is a scalar. The control goal is to force the system states to track a reference model trajectory. We select the reference model as

r_k = 0.2 sin(2πk/100).  (51)

The single-input/single-output nonlinear discrete-time system can be modeled by

y_{k+1} = f[y_k, w_k] + g[y_k, v_k] u_k,  (52)
where 119891 and 119892 are OS-ELM neural networks The neuralnetworks 119891 and 119892 have the same structure alefsym
1501 Based on
the OS-ELM adaptive control algorithm the parameters ofthe neural networks w
119896and k119896are updated to w
119896+1and k119896+1
using OS-ELM AlgorithmForOS-ELM control algorithm in Step 1 we choose119873
0=
300 the control 119906119896is selected randomly from [0 02] A small
chunk of off-line training data is used to calculate the initialoutput weights w
0and k
0 Since ELM algorithm just adjusts
the output weight it shows fast and accurate identificationresults
Following this in Step 2 both identification and controlare implementedThe response of the controlled system witha reference input 119903
119896= sin(2120587119896100) by using OS-ELM
algorithm is shown in Figure 3We can see that the controlledsystem can track the reference model rapidly The controlerror is shown in Figure 4
4.2. Choice of the Data Set. During the simulations we find that OS-ELM is sensitive to the initial training data, which directly determines the initial value of the adaptive controller. When we select different initial training samples, the control performance varies greatly. Figure 5 shows the control results of OS-ELM neural networks with different initial training samples.

In Figure 6 we select 6 different ranges of random input-output pairs. We can see that each range of input values, together with the corresponding output values, concentrates in a single area. According to the output range of the controlled plant, we can therefore choose a specific data area to train the neural network. In Figure 5 the output range of the controlled system lies in [−0.2, 0.2]. If we adopt random input from [0, 0.2] (the red area in Figure 6(b)) to initialize OS-ELM, the control result is shown in Figure 5(b). On the contrary, if random input from another area (the green, blue, and yellow areas, and so on, in Figure 6(b)) is used to initialize OS-ELM, the control result is poor (Figure 5(a)).

On one hand, in an actual production process we have a large amount of field data distributed over a wide range. On the other hand, we generally know the objective output range of the controlled plant. According to that output range we can choose a specific data area to train the neural network; in this way the control accuracy can be improved and computation time can be saved.
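The data-scope idea above can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it drives the plant of (50) (with F = 0.1) with random inputs from a chosen range and measures how many of the resulting outputs fall inside the objective output range [−0.2, 0.2]. The helper names `plant`, `collect_chunk`, and `share_in_range` are hypothetical.

```python
import numpy as np

def plant(y, u, F=0.1):
    # Simulated plant (50): y_{k+1} = (y(1 - 0.5F) + 0.5F)/(1 + y^2) - 0.5(1 + y)u
    return (y * (1.0 - 0.5 * F) + 0.5 * F) / (1.0 + y * y) - 0.5 * (1.0 + y) * u

def collect_chunk(u_lo, u_hi, n=300, seed=0):
    # Drive the plant with random inputs in [u_lo, u_hi]; record (y_k, u_k) -> y_{k+1}
    rng = np.random.default_rng(seed)
    y = 0.0
    X, Y = [], []
    for _ in range(n):
        u = rng.uniform(u_lo, u_hi)
        y_next = plant(y, u)
        X.append([y, u])
        Y.append(y_next)
        y = y_next
    return np.asarray(X), np.asarray(Y)

def share_in_range(Y, lo=-0.2, hi=0.2):
    # Fraction of recorded outputs that fall inside the objective output range
    return float(np.mean((Y >= lo) & (Y <= hi)))
```

Comparing `share_in_range` for inputs drawn from [0, 0.2] against inputs drawn from [1.5, 2] shows why the red-area data suit a target output range of [−0.2, 0.2] while the other areas do not.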
Abstract and Applied Analysis 7
[Figure 2 is a block diagram: the plant with input u_k and output y_k; identification models I_1, …, I_l, …, I_M with outputs y^1_k, …, y^l_k, …, y^M_k and identification errors e*^1_k, …, e*^l_k, …, e*^M_k; controllers C_1, …, C_l, …, C_M producing u^1_k, …, u^l_k, …, u^M_k; a reference model driven by r_k; and a switching block that selects u_k. The control error is formed against the reference model output.]

Figure 2: Structure of multiple models adaptive control.
[Figure 3 plot: r_k and y_k versus k (0 ≤ k ≤ 800); vertical axis from −0.4 to 0.6.]

Figure 3: Nonlinear system adaptive control by using OS-ELM.
4.3. Multiple Models Adaptive Control. From the above simulation results we find that OS-ELM neural networks show excellent identification capability. But from the OS-ELM adaptive control algorithm we can see that it contains two steps (an initialization phase and an adaptive control phase), and it performs well only for systems with fixed or slowly time-varying parameters. For a system with jumping parameters, once a parameter change happens the OS-ELM adaptive control algorithm needs to carry out Step 1 again, or the control performance will be very poor. To solve this kind of problem, MMAC (multiple model adaptive control) is presented as a very useful tool.
[Figure 4 plot: control error versus k (0 ≤ k ≤ 800); vertical axis from −0.05 to 0.05.]

Figure 4: Control error.
Consider the controlled plant as follows:

y_{k+1} = [y_k (1 − 0.5F) + 0.5F] / (1 + y_k²) − 0.5 (1 + y_k) u_k,   (53)

where

F = 0.1 for 1 ≤ k < 500,  F = 0.15 for 500 ≤ k < 1000,  F = 0.355 for 1000 ≤ k ≤ 1500.   (54)
We choose the index function as (48) and construct 4 OS-ELM neural network models. Among them, the weights of model 1 are reinitialized ones; models 2 and 3 are fixed models, formed according to the environments of the changing parameters; model 4 is a free-running adaptive model.
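The jumping parameter of (54) can be written as a small helper; this is an illustrative sketch (the function name `jumping_F` is hypothetical):

```python
def jumping_F(k):
    # Piecewise-constant parameter of (54); the jumps at k = 500 and k = 1000
    # are what a single fixed or slowly adapting model handles poorly.
    if 1 <= k < 500:
        return 0.10
    if 500 <= k < 1000:
        return 0.15
    return 0.355  # 1000 <= k <= 1500
```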
[Figure 5 plots: r_k and y_k versus k (0 ≤ k ≤ 800) for two initializations.]

Figure 5: OS-ELM control results with different initial training samples. (a) (−0.2, 0) random input; (b) (0, 0.2) random input.
[Figure 6 plots: (a) y_k versus k (0 ≤ k ≤ 300) for input ranges [−0.2, 0], [0, 0.2], [0.2, 0.5], [0.5, 1], [1, 1.5], [1.5, 2]; (b) y_k versus u_k for the corresponding Data 1–Data 6.]

Figure 6: Data set. (a) Output results with different input data. (b) Different ranges of input values and the corresponding output values concentrate in single areas.
At any sample time, model 1 chooses the optimal model from the model set formed by models 2, 3, and 4. Figure 7 shows the responses of the multiple-model and single-model schemes.

In Figure 7(a) we can see that single-model adaptive control shows poor control quality for the system with jumping parameters: once the parameters change abruptly, the neural network model has to readjust its weights, which are far from the true values. Figure 7(c) shows the resulting bad control input. In MMAC, the reinitialized adaptive model can initialize its weights by choosing the best weights from a set of models based on the past performance of the plant. At every sample time, one best model in the model set is selected according to the index function, and the parameters of the reinitialized model are updated from the parameters of this model. From Figure 7(b) we can see that MMAC improves the control quality dramatically compared with Figure 7(a). Figure 7(d) shows the control sequence, and Figure 7(e) shows the switching procedure of the MMAC.
5. Conclusion
The application of OS-ELM to the identification and control of nonlinear dynamic systems has been studied carefully. OS-ELM neural networks can improve the accuracy of identification and the control quality dramatically. Since the training process of an OS-ELM neural network is sensitive to the initial training data, different scopes of training data can be used to initialize the network based on the output range of the system. The topological structure of the OS-ELM neural network can be adjusted dynamically by using a multiple model switching strategy when it is used to control systems with jumping parameters. Simulation results show that the MMAC
[Figure 7 plots (0 ≤ k ≤ 1500): (a) single model, r_k and y_k; (b) multiple models, r_k and y_k; (c) control input, single model; (d) control input, multiple models; (e) switching process.]

Figure 7: Multiple OS-ELM neural network models adaptive control.
based on OS-ELM, as presented in this paper, can improve the control performance dramatically.
Appendix
Lemma A.1 (matrix inversion lemma, see [18]). If

P_k^{−1} = P_{k−1}^{−1} + Φ_k^T Φ_k σ_k,   (A.1)

where the scalar σ_k > 0, then P_k is related to P_{k−1} via

P_k = P_{k−1} − σ_k P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T).   (A.2)

Also,

P_k Φ_k^T = P_{k−1} Φ_k^T / (1 + σ_k Φ_k P_{k−1} Φ_k^T).   (A.3)
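The identities (A.1)–(A.3) can be checked numerically. The sketch below (an illustration, with arbitrary test data) compares the direct inverse of (A.1) with the rank-one update forms (A.2) and (A.3):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
P_prev = np.eye(d)                    # P_{k-1}: symmetric positive definite
Phi = rng.standard_normal((1, d))     # row regressor Phi_k
sigma = 1.0                           # sigma_k > 0

# Direct inverse of (A.1): P_k^{-1} = P_{k-1}^{-1} + Phi^T Phi sigma
P_direct = np.linalg.inv(np.linalg.inv(P_prev) + sigma * Phi.T @ Phi)

# Rank-one update form (A.2)
denom = 1.0 + sigma * float(Phi @ P_prev @ Phi.T)
P_update = P_prev - sigma * (P_prev @ Phi.T @ Phi @ P_prev) / denom
assert np.allclose(P_direct, P_update)

# (A.3): P_k Phi^T = P_{k-1} Phi^T / (1 + sigma Phi P_{k-1} Phi^T)
assert np.allclose(P_update @ Phi.T, (P_prev @ Phi.T) / denom)
```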
Proof of Theorem 1. Define β_k ≜ 1/(1 + σ_k Φ_k P_{k−1} Φ_k^T); then from (20), (22), and (A.3) we have

θ̃_{k+1} = θ̃_k − σ_k β_k P_{k−1} Φ_k^T e*_{k+1},   (A.4)

where θ̃_{k+1} ≜ θ*_0 − θ_{k+1}. From (A.4) we can then obtain

Φ_k θ̃_{k+1} + Δ_f = β_k e*_{k+1}.   (A.5)

Introduce the Lyapunov function V_{k+1} = θ̃_{k+1}^T P_k^{−1} θ̃_{k+1}; substituting (A.1) into it, we obtain

V_{k+1} = θ̃_{k+1}^T P_{k−1}^{−1} θ̃_{k+1} + σ_k θ̃_{k+1}^T Φ_k^T Φ_k θ̃_{k+1}
= [θ̃_k − σ_k β_k P_{k−1} Φ_k^T e*_{k+1}]^T P_{k−1}^{−1} [θ̃_k − σ_k β_k P_{k−1} Φ_k^T e*_{k+1}] + σ_k (Φ_k θ̃_{k+1})²
= θ̃_k^T P_{k−1}^{−1} θ̃_k − 2 σ_k β_k Φ_k θ̃_k e*_{k+1} + σ_k² β_k² e*²_{k+1} Φ_k P_{k−1} Φ_k^T + σ_k (Φ_k θ̃_{k+1})².   (A.6)
Substituting (A.5) into the above equation gives

V_{k+1} = V_k − 2 σ_k Φ_k θ̃_k (Φ_k θ̃_{k+1} + Δ_f) + σ_k² Φ_k P_{k−1} Φ_k^T (Φ_k θ̃_{k+1} + Δ_f)² + σ_k (Φ_k θ̃_{k+1})².   (A.7)

Referring to [14], we can obtain

V_{k+1} = V_k − σ_k (Φ_k θ̃_{k+1})² − 2 σ_k Φ_k θ̃_{k+1} Δ_f − σ_k² Φ_k P_{k−1} Φ_k^T (Φ_k θ̃_{k+1} + Δ_f)²
≤ V_k − σ_k (Φ_k θ̃_{k+1})² − 2 σ_k Φ_k θ̃_{k+1} Δ_f,   (A.8)
that is,

V_{k+1} ≤ V_k − σ_k (Φ_k θ̃_{k+1})² − 2 σ_k Φ_k θ̃_{k+1} Δ_f
= V_k + σ_k Δ_f² − σ_k (Φ_k θ̃_{k+1} + Δ_f)²
= V_k + σ_k Δ_f² − σ_k β_k² e*²_{k+1}
= V_k − σ_k (β_k² e*²_{k+1} − Δ_f²).   (A.9)
Finally, if we choose σ_k as in (36), we have

V_{k+1} ≤ V_k − σ_k (β_k² e*²_{k+1} − ε²),   (A.10)

that is,

V_{k+1} − V_k ≤ −σ_k (β_k² e*²_{k+1} − ε²) ≤ 0.   (A.11)
Since V_k is nonnegative,

θ̃_{k+1}^T P_k^{−1} θ̃_{k+1} ≤ θ̃_k^T P_{k−1}^{−1} θ̃_k ≤ θ̃_1^T P_0^{−1} θ̃_1.   (A.12)

Now from the matrix inversion lemma it follows that

λ_min(P_k^{−1}) ≥ λ_min(P_{k−1}^{−1}) ≥ λ_min(P_0^{−1}),   (A.13)

and this implies

λ_min(P_0^{−1}) ‖θ̃_{k+1}‖² ≤ λ_min(P_k^{−1}) ‖θ̃_{k+1}‖² ≤ θ̃_{k+1}^T P_k^{−1} θ̃_{k+1} ≤ θ̃_1^T P_0^{−1} θ̃_1 ≤ λ_max(P_0^{−1}) ‖θ̃_1‖².   (A.14)
This establishes part (i). Summing both sides of (A.11) from k = 1 to N, with V_k nonnegative, we have

0 ≤ V_{N+1} ≤ V_1 − Σ_{k=1}^{N} σ_k (β_k² e*²_{k+1} − ε²),   (A.15)

and letting N → ∞ we immediately get

lim_{N→∞} Σ_{k=1}^{N} σ_k [e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε²] < ∞,   (A.16)

so (ii) holds. Then

lim_{k→∞} σ_k [e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε²] = 0,   (A.17)

and (a) and (b) hold.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the Fund of the National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References
[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270–276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407–413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452–465, 2011.
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985–990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541–552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588–2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche, et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111–124, 2013.
[13] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553–559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669–1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu, et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190–200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.
f[x_k, w*] and g[x_k, v*], which can approximate the functions f_0[x_k] and g_0[x_k] without unmodelled dynamics. The nonlinear system (12) can be described by the neural networks as follows:

y_{k+1} = f[x_k, w*] + g[x_k, v*] u_k,   (13)

where the functions f[·, ·] and g[·, ·] can be selected as OS-ELM neural networks, that is,

f[x_k, w*] = Σ_{i=1}^{L} w*_i G(a_i, b_i, x_k),
g[x_k, v*] = Σ_{i=L+1}^{2L} v*_i G(a_i, b_i, x_k),   (14)

where a_i, b_i can be generated randomly and kept constant. Then (13) can be rewritten as

y_{k+1} = Φ_k θ*_0,   (15)

where Φ_k = [G(a_1, b_1, x_k) ⋯ G(a_L, b_L, x_k), G(a_{L+1}, b_{L+1}, x_k) u_k ⋯ G(a_{2L}, b_{2L}, x_k) u_k] and θ*_0 = [w*, v*]^T.
Let w_k and v_k denote the estimates of w* and v* at time k. The neural network identification model can be written as follows:

ŷ_{k+1} = f[x_k, w_k] + g[x_k, v_k] u_k,   (16)

that is,

ŷ_{k+1} = Φ_k θ_k,   (17)

where θ_k = [w_k, v_k]^T. Define e*_{k+1} as

e*_{k+1} = y_{k+1} − ŷ_{k+1} = y_{k+1} − Φ_k θ_k.   (18)
Referring to the OS-ELM algorithm, and taking into account the time-series nature of the control system, the on-line observation data come one by one; if we choose N_j ≡ 1, the updating rule can be rewritten as

P_k = P_{k−1} − P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + Φ_k P_{k−1} Φ_k^T),
θ_{k+1} = θ_k + P_k Φ_k^T e*_{k+1},   (19)

where θ_0 and P_0 = (Φ_0^T Φ_0)^{−1} can be obtained in Step 1 (initialization phase) of OS-ELM.

Due to the existence of unmodelled dynamics, the nonlinear system (15) can be represented as

y_{k+1} = Φ_k θ*_0 + Δ_f,   (20)

where Δ_f is the model error and satisfies sup |Δ_f| ≤ ε. Then a dead-zone algorithm will be used:

P_k = P_{k−1} − σ_k P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T),   (21)
θ_{k+1} = θ_k + σ_k P_k Φ_k^T e*_{k+1},   (22)

where

σ_k = 1 if e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² > ε², and σ_k = 0 otherwise.   (23)
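One update of (21)–(23) can be sketched as follows. This is a minimal illustration (the function name `deadzone_rls_step` and the vector/row-regressor shapes are assumptions, not the authors' code); the dead zone freezes both P and θ when the normalized error is already within ε:

```python
import numpy as np

def deadzone_rls_step(theta, P, phi, y_next, eps):
    # phi: 1 x (2L) regressor row Phi_k; theta: (2L,) estimate theta_k.
    e = y_next - float(phi @ theta)         # e*_{k+1} = y_{k+1} - Phi_k theta_k
    denom = 1.0 + float(phi @ P @ phi.T)    # 1 + sigma_k Phi P Phi^T with sigma_k = 1
    sigma = 1.0 if (e / denom) ** 2 > eps ** 2 else 0.0
    if sigma == 0.0:
        return theta, P                     # inside the dead zone: no update
    P_new = P - (P @ phi.T @ phi @ P) / denom          # (21) with sigma_k = 1
    theta_new = theta + (P_new @ phi.T).ravel() * e    # (22)
    return theta_new, P_new
```

On exact data (Δ_f = 0) with a small ε, repeated calls behave like ordinary recursive least squares and drive θ toward the true parameter vector.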
Theorem 1. Given the system described by (20), with the updating rule described by (21)–(23), we have the following results:

(i) ‖θ*_0 − θ_k‖² ≤ ‖θ*_0 − θ_{k−1}‖² ≤ ⋯ ≤ ‖θ*_0 − θ_1‖²;   (24)

(ii) lim_{N→∞} Σ_{k=1}^{N} σ_k [e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε²] < ∞,   (25)

and this implies

(a) lim_{k→∞} σ_k [e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε²] = 0;   (26)

(b) lim_{k→∞} sup e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² ≤ ε².   (27)

For the detailed proof of Theorem 1, readers can refer to [14]; it can also be found in the appendix.
3.2. Adaptive Control Using OS-ELM. In this section we discuss the adaptive control problem using OS-ELM neural networks. For the system described as

y_{k+1} = f_0[x_k] + g_0[x_k] u_k,   (28)

with the OS-ELM neural network identification model

ŷ_{k+1} = f[x_k, w_k] + g[x_k, v_k] u_k,   (29)

the control u_k can be given as below:

u_k = (−f[x_k, w_k] + r_{k+1}) / g[x_k, v_k],   (30)

where r_{k+1} is the desired output. The control u_k is applied to both the plant and the neural network model.
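The certainty-equivalence law (30) is a single division; a sketch is below. The `g_floor` guard against division by a near-zero estimate is a practical addition of ours, not part of the printed control law:

```python
def control_law(f_hat, g_hat, r_next, g_floor=0.05):
    # (30): u_k = (r_{k+1} - f[x_k, w_k]) / g[x_k, v_k].
    # Guard (not in the paper): keep |g_hat| away from zero so the division
    # stays well defined when the estimate passes near zero.
    if abs(g_hat) < g_floor:
        g_hat = g_floor if g_hat >= 0.0 else -g_floor
    return (r_next - f_hat) / g_hat
```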
3.2.1. OS-ELM Adaptive Control Algorithm

Step 1 (initialization phase). Given a random input sequence {u_k}, k = 1, …, N_0, from the controlled plant (12) we can get a small chunk of off-line training data S_0 = {(x_i, y_{i+1})}_{i=1}^{N_0} ⊂ R^n × R^1. Given the output function G(a_i, b_i, x) and the hidden node number 2L, which are used to initialize the learning, where N_0 ≥ 2L:
(a) Assign random parameters of hidden nodes (a_i, b_i), where i = 1, …, 2L.

(b) Calculate the initial hidden-layer output matrix Φ_0:

Φ_0 = [φ_1; …; φ_{N_0}] (an N_0 × 2L matrix),   (31)

where φ_i = [G(a_1, b_1, x_i) ⋯ G(a_L, b_L, x_i), G(a_{L+1}, b_{L+1}, x_i) u_i ⋯ G(a_{2L}, b_{2L}, x_i) u_i], i = 1, …, N_0.

(c) Calculate the initial output weight as follows:

θ_0 = P_0 Φ_0^T Y_0,   (32)

where P_0 = (Φ_0^T Φ_0)^{−1}, θ_0 = [w_0, v_0]^T, and Y_0 = [y_2 ⋯ y_{N_0+1}]^T.

(d) Set k = N_0 + 1.
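Step 1 can be sketched as below. This is an illustration under our own assumptions: sigmoid hidden nodes G(a, b, x) = 1/(1 + exp(−(a·x + b))), pseudo-inverse in place of the explicit inverse (random sigmoid features can be ill conditioned), and hypothetical helper names `hidden_layer_row` and `init_phase`:

```python
import numpy as np

def hidden_layer_row(x, u, A, B, L):
    # phi_i of (31): 2L sigmoid nodes; the last L entries are multiplied by u
    # because they model the control gain g. A is n x 2L, B is (2L,).
    h = 1.0 / (1.0 + np.exp(-(x @ A + B)))
    return np.concatenate([h[:L], h[L:] * u])

def init_phase(X, U, Y, L, seed=0):
    # Step 1: build Phi_0 row by row and compute theta_0 = P_0 Phi_0^T Y_0,
    # with P_0 = (Phi_0^T Phi_0)^{-1} computed via pinv for numerical safety.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.uniform(-1.0, 1.0, size=(n, 2 * L))  # a_i, generated once, kept fixed
    B = rng.uniform(-1.0, 1.0, size=2 * L)       # b_i
    Phi0 = np.stack([hidden_layer_row(X[i], U[i], A, B, L) for i in range(len(X))])
    P0 = np.linalg.pinv(Phi0.T @ Phi0)
    theta0 = P0 @ Phi0.T @ Y
    return theta0, P0, A, B
```

The returned θ_0 agrees (up to numerical error) with the least-squares fit of Y_0 in the random feature space, which is exactly what (32) computes.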
Step 2 (adaptive control phase). (a) Calculate the control u_k and the output of the plant y_{k+1}:

u_k = (−f[x_k, w_k] + r_{k+1}) / g[x_k, v_k],
y_{k+1} = f_0[x_k] + g_0[x_k] u_k,   (33)

where θ_{N_0+1} = [w_{N_0+1}, v_{N_0+1}]^T ≜ θ_0 and x_k = [y_{k−n+1}, …, y_k, u_{k−m}, …, u_{k−1}].

(b) Calculate the partial hidden-layer output matrix Φ_k for the kth observation data:

Φ_k = [G(a_1, b_1, x_k) ⋯ G(a_L, b_L, x_k), G(a_{L+1}, b_{L+1}, x_k) u_k ⋯ G(a_{2L}, b_{2L}, x_k) u_k].   (34)

(c) Calculate the output weight:

P_k = P_{k−1} − σ_k P_{k−1} Φ_k^T Φ_k P_{k−1} / (1 + σ_k Φ_k P_{k−1} Φ_k^T),
θ_{k+1} = θ_k + σ_k P_k Φ_k^T e*_{k+1},   (35)

where P_{N_0} ≜ P_0 and

σ_k = 1 if e*²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² > ε², and σ_k = 0 otherwise.   (36)
(d) Set k = k + 1 and go to Step 2(a).
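Steps 1 and 2 can be combined into one compressed end-to-end sketch on the scalar plant (50). This is our own illustration, not the authors' code: the sigmoid node G, the sizes L = 20 and N_0 = 300, the dead-zone width ε = 0.02, the pseudo-inverse, the guard on the estimated gain ĝ, and the clipping of u_k are all assumptions or practical additions that the printed algorithm does not contain:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N0, eps = 20, 300, 0.02                    # hidden nodes per block, offline samples, dead zone
A = rng.uniform(-2.0, 2.0, size=(1, 2 * L))   # random hidden parameters a_i, kept fixed
B = rng.uniform(-1.0, 1.0, size=2 * L)        # b_i

def plant(y, u, F=0.1):                       # system (50)
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y * y) - 0.5 * (1 + y) * u

def features(y):                              # G(a_i, b_i, x) for the scalar state x_k = y_k
    return 1.0 / (1.0 + np.exp(-(np.array([y]) @ A + B)))

def phi_row(y, u):                            # regressor of (34): [f-block, g-block * u]
    h = features(y)
    return np.concatenate([h[:L], h[L:] * u])

# ---- Step 1: initialization phase -------------------------------------
y, rows, targets = 0.0, [], []
for _ in range(N0):
    u = rng.uniform(0.0, 0.2)
    rows.append(phi_row(y, u))
    y = plant(y, u)
    targets.append(y)
Phi0 = np.stack(rows)
P = np.linalg.pinv(Phi0.T @ Phi0)             # P_0; pinv guards against ill conditioning
theta = P @ Phi0.T @ np.asarray(targets)      # theta_0 of (32)

# ---- Step 2: adaptive control phase ------------------------------------
r = lambda k: 0.2 * np.sin(2 * np.pi * k / 100.0)   # reference model (51)
errors = []
for k in range(800):
    h = features(y)
    f_hat = float(h[:L] @ theta[:L])
    g_hat = float(h[L:] @ theta[L:])
    if abs(g_hat) < 0.05:                     # practical guard, not in the printed algorithm
        g_hat = 0.05 if g_hat >= 0 else -0.05
    u = float(np.clip((r(k + 1) - f_hat) / g_hat, -2.0, 2.0))  # (30), clipped
    y_next = plant(y, u)
    phi = phi_row(y, u)[None, :]              # row vector Phi_k
    e = y_next - float(phi @ theta)           # e*_{k+1}
    denom = 1.0 + float(phi @ P @ phi.T)
    if (e / denom) ** 2 > eps ** 2:           # dead zone (36)
        P = P - (P @ phi.T @ phi @ P) / denom # (35) with sigma_k = 1
        theta = theta + (P @ phi.T).ravel() * e
    errors.append(abs(y_next - r(k + 1)))
    y = y_next
```

With these (assumed) settings, the closed loop tracks the reference closely after the initial transient, mirroring the behavior reported for Figures 3 and 4.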
3.3. Stability Analysis. Define the error between the neural network identification state and the reference model state as

ê_{k+1} = ŷ_{k+1} − r_{k+1},   (37)

that is,

ê_{k+1} = f[x_k, w_k] + g[x_k, v_k] u_k − r_{k+1}.   (38)

By (30) we have

ê_{k+1} = r_{k+1} − r_{k+1} = 0.   (39)

The control error is defined as

e_{k+1} = y_{k+1} − r_{k+1},   (40)

that is,

e_{k+1} = y_{k+1} − ŷ_{k+1} + ŷ_{k+1} − r_{k+1},   (41)

so

e_{k+1} = e*_{k+1} + ê_{k+1} = e*_{k+1}.   (42)
Using the properties of the parameter estimation algorithm stated in Theorem 1, we immediately have

lim_{k→∞} σ_k [e²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² − ε²] = 0.   (43)

Because the activation function G(a_i, b_i, x) is bounded, f and g are bounded. If u_k is chosen as (30), then u_k is bounded; hence Φ_k is bounded. From (43) we have

lim_{k→∞} sup e²_{k+1}/(1 + σ_k Φ_k P_{k−1} Φ_k^T)² ≤ ε².   (44)

Considering that Φ_k is bounded, e²_{k+1} is bounded. From (40), y_{k+1} = e_{k+1} + r_{k+1}; since r_k is bounded, y_{k+1} is bounded too.

To sum up, OS-ELM neural networks have the capability of identifying and controlling nonlinear systems. However, as we know, conventional adaptive control systems are usually based on a fixed or slowly adapting model; they cannot react quickly to abrupt changes and produce large transient errors before convergence. In this case the MMAC algorithm is presented as a useful tool. We can construct multiple OS-ELM neural networks to cover the uncertainty of the plant parameters by initializing each OS-ELM neural network at a different position. Meanwhile, an effective index function can be used to select the optimal identification model and the optimal controller. MMAC based on the OS-ELM algorithm can improve the control property of the system greatly and avoid the influence of jumping parameters on the plant.
3.4. Multiple Model Adaptive Control. Multiple model adaptive control can be regarded as an extension of conventional indirect adaptive control. The control system contains M identification models, denoted by I_l, l ∈ {1, 2, …, M}. According to (16) and (30), the multiple model set can be established as follows:

I_l: ŷ^l_{k+1} = f^l[x_k, w^l_k] + g^l[x_k, v^l_k] u^l_k,   (45)

where

f^l[x_k, w^l_k] = Σ_{i=1}^{L} w^l_i G(a_i, b_i, x_k),
g^l[x_k, v^l_k] = Σ_{i=L+1}^{2L} v^l_i G(a_i, b_i, x_k).   (46)

The control u^l_k can be rewritten as

u^l_k = (−f^l[x_k, w^l_k] + r_{k+1}) / g^l[x_k, v^l_k].   (47)

Define e*^l_{k+1} ≜ y_{k+1} − ŷ^l_{k+1}. At any instant, one of the models I_l is selected by a switching rule, and the corresponding control input is used to control the plant. Referring to Theorem 1, the index function has the form

J^l_k = Σ_{τ=1}^{k} σ_τ [e*^l²_{τ+1}/(1 + σ_τ Φ_τ P_{τ−1} Φ_τ^T)² − ε²].   (48)

At every sample time, the model that corresponds to the minimum J^l_k is chosen; that is, I_i is chosen if

J^i_k = min_l J^l_k,   (49)

and u^i_k is used as the control input at that instant. The structure of the multiple models adaptive control is shown in Figure 2.

Referring to [15], Narendra and Xiang established 4 kinds of MMAC structures: (1) all models are fixed; (2) all models are adaptive; (3) (M − 1) fixed models and one adaptive model; (4) (M − 2) fixed models, one free-running adaptive model, and one reinitialized adaptive model. A similar stability result can be obtained here. For the detailed proof of the stability of MMAC, the reader can refer to [15–17].
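The index (48) and the switching rule (49) can be sketched per time step as below. This is an illustration with hypothetical helper names (`update_indices`, `best_model`); `denoms[l]` stands for 1 + σ_τ Φ_τ P_{τ−1} Φ_τ^T of model l:

```python
import numpy as np

def update_indices(J, errs, denoms, eps):
    # One step of (48): J^l_k += sigma_tau * (e*l_{tau+1}^2 / denom^2 - eps^2),
    # where sigma_tau is the dead-zone switch of (36) applied per model.
    for l in range(len(J)):
        norm_sq = (errs[l] / denoms[l]) ** 2
        sigma = 1.0 if norm_sq > eps ** 2 else 0.0
        J[l] += sigma * (norm_sq - eps ** 2)
    return J

def best_model(J):
    # (49): choose the model with the minimal cumulative index
    return int(np.argmin(J))
```

A model whose normalized identification error stays inside the dead zone accumulates nothing, so it is naturally preferred by the minimum rule.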
4. Simulation Results

4.1. Adaptive Control. In this section we present simulation results for adaptive control of nonlinear discrete-time systems using OS-ELM neural networks. The nonlinear system considered is

y_{k+1} = [y_k (1 − 0.5F) + 0.5F] / (1 + y_k²) − 0.5 (1 + y_k) u_k,   (50)

where we define f_0[y_k] = [y_k (1 − 0.5F) + 0.5F] / (1 + y_k²) and g_0[y_k] = −0.5 (1 + y_k); F = 0.1 is a scalar. The control goal is to force the system states to track a reference model trajectory. We select the reference model as

r_k = 0.2 sin(2πk/100).   (51)

The single-input/single-output nonlinear discrete-time system can be modeled by

ŷ_{k+1} = f[y_k, w_k] + g[y_k, v_k] u_k,   (52)
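The simulation plant (50) and reference model (51) are straightforward to encode; the sketch below (helper names `f0`, `g0`, `plant`, and `r` are ours) writes (50) in the control-affine form y_{k+1} = f_0[y_k] + g_0[y_k] u_k used throughout Section 3:

```python
import math

def f0(y, F=0.1):
    # f_0[y_k] of (50)
    return (y * (1.0 - 0.5 * F) + 0.5 * F) / (1.0 + y * y)

def g0(y):
    # g_0[y_k] of (50)
    return -0.5 * (1.0 + y)

def plant(y, u, F=0.1):
    # (50): y_{k+1} = f_0[y_k] + g_0[y_k] u_k
    return f0(y, F) + g0(y) * u

def r(k):
    # reference model (51)
    return 0.2 * math.sin(2.0 * math.pi * k / 100.0)
```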
119896119890lowast
119896+1 (A4)
where 120579119896+1
≜ 120579lowast
0minus 120579119896+1
Then from (A4) we can obtain
Φ119896120579119896+1
+ Δ119891= 120573119896119890lowast
119896+1 (A5)
Introducing Lyapunov function as119881119896+1
= 120579119879
119896+1Pminus1119896120579119896+1
taking(A1) into it hence we can obtain
119881119896+1
= 120579119879
119896+1Pminus1119896minus1
120579119896+1
+ 120579119879
119896+1Φ119879
119896Φ119896120590119896120579119896+1
= [120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1]119879
Pminus1119896minus1
[120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1]
+ 120590119896(Φ119896120579119896+1
)2
= 120579119879
119896Pminus1119896minus1
120579119896minus 2120590119896Φ119896120579119896120573119896119890lowast
119896+1
+ 1205902
1198961205732
119896119890lowast2
119896+1Φ119896P119896minus1Φ119879
119896+ 120590119896(Φ119896120579119896+1
)2
(A6)
Submitting (A5) in the above equation
119881119896+1
= 119881119896minus 2120590119896Φ119896120579119896(Φ119896120579119896+1
+ Δ119891)
+ 1205902
119896Φ119896P119896minus1Φ119879
119896(Φ119896120579119896+1
+ Δ119891)2
+ 120590119896(Φ119896120579119896+1
)2
(A7)
Referring to [14] we can obtain
119881119896+1
= 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
minus 1205902
119896Φ119896P119896minus1Φ119879
119896(Φ119896120579119896+1
+ Δ119891)2
le 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
(A8)
that is
119881119896+1
le 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
= 119881119896+ 120590119896Δ2
119891minus 120590119896(Φ119896120579119896+1
+ Δ119891)2
= 119881119896+ 120590119896Δ2
119891minus 1205901198961205732
119896119890lowast2
119896+1
= 119881119896minus 120590119896(1205732
119896119890lowast2
119896+1minus Δ2
119891)
(A9)
Finally if we choose 120590119896as (36) we have
119881119896+1
le 119881119896minus 120590119896(1205732
119896119890lowast2
119896+1minus 1205762) (A10)
that is
119881119896+1
minus 119881119896le minus120590119896(1205732
119896119890lowast2
119896+1minus 1205762) le 0 (A11)
119881119896is nonnegative then
120579119879
119896+1Pminus1119896120579119896+1
le 120579119879
119896Pminus1119896minus1
120579119896le 120579119879
1Pminus101205791 (A12)
Now from the matrix inversion lemma it follows that
120582min (Pminus1
119896) ge 120582min (P
minus1
119896minus1) ge 120582min (P
minus1
0) (A13)
this implies
120582min (Pminus1
0)10038171003817100381710038171003817120579119896+1
10038171003817100381710038171003817
2
le 120582min (Pminus1
119896)10038171003817100381710038171003817120579119896+1
10038171003817100381710038171003817
2
le 120579119879
119896+1Pminus1119896120579119896+1
le 120579119879
1Pminus101205791le 120582min (P
minus1
0)100381710038171003817100381710038171205791
10038171003817100381710038171003817
2
(A14)
This establishes part (i)Summing both sides of (A11) from 1 to 119873 with 119881
119896being
nonnegative we have
119881119896+1
le 1198811minus lim119873rarrinfin
119873
sum
119896=1
120590119896(1205732
119896119890lowast2
119896+1minus 1205762) (A15)
we immediately get
lim119873rarrinfin
119873
sum
119896=1
120590119896[
[
119890lowast2
119896+1
(1 + 120590119896Φ119896P119896minus1Φ119879119896)2minus 1205762]
]
lt infin (A16)
(ii) holds Then
lim119896rarrinfin
120590119896[
[
119890lowast2
119896+1
(1 + 120590119896Φ119896P119896minus1Φ119879119896)2minus 1205762]
]
= 0 (A17)
(a) and (b) hold
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work is partially supported by the Fund of NationalNatural Science Foundation of China (Grant no 61104013)Program for New Century Excellent Talents in Universi-ties (NCET-11-0578) the Fundamental Research Funds forthe Central Universities (FRF-TP-12-005B) and SpecializedResearch Fund for the Doctoral Program of Higher Educa-tion (20130006110008)
References
[1] C Yang S S Ge and T H Lee ldquoOutput feedback adaptive con-trol of a class of nonlinear discrete-time systems with unknowncontrol directionsrdquoAutomatica vol 45 no 1 pp 270ndash276 2009
[2] X-S Wang C-Y Su and H Hong ldquoRobust adaptive control ofa class of nonlinear systems with unknown dead-zonerdquo Auto-matica vol 40 no 3 pp 407ndash413 2004
[3] M Chen S S Ge and B Ren ldquoAdaptive tracking control ofuncertain MIMO nonlinear systems with input constraintsrdquoAutomatica vol 47 no 3 pp 452ndash465 2011
Abstract and Applied Analysis 11
[4] K S Narendra andK Parthasarathy ldquoIdentification and controlof dynamical systems using neural networksrdquo IEEE Transac-tions on Neural Networks vol 1 no 1 pp 4ndash27 1990
[5] D E Rumelhart G E Hinton and R J Williams ldquoLearningrepresentations by back-propagating errorsrdquo Nature vol 323no 6088 pp 533ndash536 1986
[6] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine a new learning scheme of feedforward neural net-worksrdquo in Proceedings of the IEEE International Joint Conferenceon Neural Networks pp 985ndash990 July 2004
[7] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006
[8] G-B Huang D HWang and Y Lan ldquoExtreme learningmach-ines a surveyrdquo International Journal of Machine Learning andCybernetics vol 2 no 2 pp 107ndash122 2011
[9] N-Y Liang G-B Huang P Saratchandran and N Sundarara-jan ldquoA fast and accurate online sequential learning algorithmfor feedforward networksrdquo IEEE Transactions on Neural Net-works vol 17 no 6 pp 1411ndash1423 2006
[10] S Suresh R Venkatesh Babu and H J Kim ldquoNo-referenceimage quality assessment using modified extreme learningmachine classifierrdquo Applied Soft Computing Journal vol 9 no2 pp 541ndash552 2009
[11] A A Mohammed R Minhas Q M Jonathan Wu and M ASid-Ahmed ldquoHuman face recognition based on multidimen-sional PCA and extreme learning machinerdquo Pattern Recogni-tion vol 44 no 10-11 pp 2588ndash2597 2011
[12] F BenotM VanHeeswijk YMiche et al ldquoFeature selection fornonlinear models with extreme learning machinesrdquoNeurocom-puting vol 102 pp 111ndash124 2013
[13] E K P Chong and S H Zak An Introduction to OptimizationWiley New York NY USA 2013
[14] H Jiang ldquoDirectly adaptive fuzzy control of discrete-timechaotic systems by least squares algorithm with dead-zonerdquoNonlinear Dynamics vol 62 no 3 pp 553ndash559 2010
[15] K S Narendra and C Xiang ldquoAdaptive control of discrete-timesystems using multiple modelsrdquo Institute of Electrical and Ele-ctronics Engineers Transactions on Automatic Control vol 45no 9 pp 1669ndash1686 2000
[16] X-L Li D-X Liu J-Y Li and D-W Ding ldquoRobust adaptivecontrol for nonlinear discrete-time systems by using multiplemodelsrdquoMathematical Problems in Engineering vol 2013 Arti-cle ID 679039 10 pages 2013
[17] X L Li C Jia D X Liu et al ldquoNonlinear adaptive control usingmultiple models and dynamic neural networksrdquo Neurocomput-ing vol 136 pp 190ndash200 2014
[18] G C Goodwin and K S Sin Adaptive Filtering Prediction andControl Prentice-Hall 1984
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Abstract and Applied Analysis 5
(a) Assign random parameters of the hidden nodes $(\mathbf{a}_i, b_i)$, where $i = 1, 2, \ldots, 2L$.

(b) Calculate the initial hidden-layer output matrix $\Phi_0$:
$$\Phi_0 = \begin{bmatrix} \phi_1 \\ \vdots \\ \phi_{N_0} \end{bmatrix}_{N_0 \times 2L}, \quad (31)$$
where $\phi_i = \left[ G(\mathbf{a}_1, b_1, \mathbf{x}_i) \cdots G(\mathbf{a}_L, b_L, \mathbf{x}_i),\ G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_i) u_i \cdots G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_i) u_i \right]$, $i = 1, \ldots, N_0$.

(c) Calculate the initial output weight:
$$\theta_0 = P_0 \Phi_0^T Y_0, \quad (32)$$
where $P_0 = (\Phi_0^T \Phi_0)^{-1}$, $\theta_0 = [\mathbf{w}_0;\ \mathbf{v}_0]$, and $Y_0 = [y_2 \cdots y_{N_0+1}]^T$.

(d) Set $k = N_0 + 1$.
Step 2. Adaptive Control Phase. (a) Calculate the control $u_k$ and the output of the plant $y_{k+1}$:
$$u_k = \frac{-\hat{f}[\mathbf{x}_k, \mathbf{w}_k] + r_{k+1}}{\hat{g}[\mathbf{x}_k, \mathbf{v}_k]}, \qquad y_{k+1} = f_0[\mathbf{x}_k] + g_0[\mathbf{x}_k]\, u_k, \quad (33)$$
where $\theta_{N_0+1} = [\mathbf{w}_{N_0+1};\ \mathbf{v}_{N_0+1}] \triangleq \theta_0$ and $\mathbf{x}_k = [y_{k-n+1}, \ldots, y_k, u_{k-m}, \ldots, u_{k-1}]$.

(b) Calculate the partial hidden-layer output matrix $\Phi_k$ for the $k$th observation:
$$\Phi_k = \left[ G(\mathbf{a}_1, b_1, \mathbf{x}_k) \cdots G(\mathbf{a}_L, b_L, \mathbf{x}_k),\ G(\mathbf{a}_{L+1}, b_{L+1}, \mathbf{x}_k) u_k \cdots G(\mathbf{a}_{2L}, b_{2L}, \mathbf{x}_k) u_k \right]. \quad (34)$$

(c) Calculate the output weight:
$$P_k = P_{k-1} - \frac{\sigma_k P_{k-1} \Phi_k^T \Phi_k P_{k-1}}{1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T}, \qquad \theta_{k+1} = \theta_k + \sigma_k P_k \Phi_k^T e^*_{k+1}, \quad (35)$$
where $P_{N_0} \triangleq P_0$ and
$$\sigma_k = \begin{cases} 1, & \dfrac{e^{*2}_{k+1}}{\left(1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T\right)^2} > \varepsilon^2, \\[2mm] 0, & \text{otherwise}. \end{cases} \quad (36)$$

(d) Set $k = k + 1$; go to Step 2(a).
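The two-phase procedure above can be sketched compactly. The sketch below is a scaled-down illustration, not the paper's implementation: it assumes $L = 10$ sigmoid hidden nodes per sub-network (the paper uses 50), $N_0 = 100$ initialization samples (the paper uses 300), a small ridge term when forming $P_0$, and a clip on $u_k$ to guard against a near-zero $\hat{g}$ — none of these safeguards appear in the paper. The plant and reference are those of the simulation section, (50) and (51).

```python
import numpy as np

rng = np.random.default_rng(0)
L, N0, EPS = 10, 100, 0.05   # hidden nodes per sub-network, init samples, dead-zone bound

def plant(y, u, F=0.1):
    """Controlled plant (50): y_{k+1} = (y(1-0.5F)+0.5F)/(1+y^2) - 0.5(1+y)u."""
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y * y) - 0.5 * (1 + y) * u

# Step 1(a): random hidden-node parameters (a_i, b_i), i = 1..2L, never retrained
A = rng.normal(size=2 * L)
B = rng.normal(size=2 * L)

def phi(y, u):
    """Row Phi_k of (34): first L sigmoids feed w, last L are scaled by u_k and feed v."""
    g = 1.0 / (1.0 + np.exp(-(A * y + B)))
    g[L:] *= u
    return g

# Step 1(b)-(c): batch least squares on N0 random-input samples
y, Phi0, Y0 = 0.0, [], []
for _ in range(N0):
    u = rng.uniform(0.0, 0.2)                  # training input drawn from [0, 0.2]
    Phi0.append(phi(y, u))
    y = plant(y, u)
    Y0.append(y)
Phi0, Y0 = np.array(Phi0), np.array(Y0)
P = np.linalg.inv(Phi0.T @ Phi0 + 1e-8 * np.eye(2 * L))  # P0, small ridge for invertibility
theta = P @ Phi0.T @ Y0                                   # theta0 = P0 Phi0^T Y0, (32)

# Step 2: certainty-equivalence control plus dead-zone RLS update, (33)-(36)
errs = []
for k in range(800):
    r_next = 0.2 * np.sin(2 * np.pi * (k + 1) / 100)      # reference model (51)
    g = 1.0 / (1.0 + np.exp(-(A * y + B)))
    f_hat, g_hat = theta[:L] @ g[:L], theta[L:] @ g[L:]
    u = (-f_hat + r_next) / g_hat                         # control law (33)
    u = float(np.clip(u, -1.5, 1.5))   # safeguard (not in the paper) against tiny g_hat
    row = phi(y, u)
    y_hat = row @ theta                                   # one-step prediction
    y = plant(y, u)
    e_star = y - y_hat                                    # identification error e*_{k+1}
    denom = 1.0 + row @ P @ row
    sigma = 1.0 if (e_star / denom) ** 2 > EPS ** 2 else 0.0   # dead zone (36)
    P -= sigma * np.outer(P @ row, row @ P) / denom       # covariance update (35)
    theta += sigma * (P @ row) * e_star                   # weight update (35)
    errs.append(abs(y - r_next))
```

Only the output weights `theta` are ever updated; the hidden-node parameters stay at their random draws, which is what makes the per-step cost a single rank-one update.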
3.3. Stability Analysis. Define the error between the neural network identification state and the reference model state as
$$\hat{e}_{k+1} = \hat{y}_{k+1} - r_{k+1}, \quad (37)$$
that is,
$$\hat{e}_{k+1} = \hat{f}[\mathbf{x}_k, \mathbf{w}_k] + \hat{g}[\mathbf{x}_k, \mathbf{v}_k]\, u_k - r_{k+1}. \quad (38)$$
By (30) we have
$$\hat{e}_{k+1} = r_{k+1} - r_{k+1} = 0. \quad (39)$$
The control error is defined as
$$e_{k+1} = y_{k+1} - r_{k+1}, \quad (40)$$
that is,
$$e_{k+1} = y_{k+1} - \hat{y}_{k+1} + \hat{y}_{k+1} - r_{k+1}, \quad (41)$$
so
$$e_{k+1} = e^*_{k+1} + \hat{e}_{k+1} = e^*_{k+1}. \quad (42)$$
Using the properties of the parameter estimation algorithm stated in Theorem 1, we immediately have
$$\lim_{k \to \infty} \sigma_k \left[ \frac{e^2_{k+1}}{\left(1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T\right)^2} - \varepsilon^2 \right] = 0. \quad (43)$$
Because the activation function $G(\mathbf{a}_i, b_i, \mathbf{x})$ is bounded, $\hat{f}$ and $\hat{g}$ are bounded. If $u_k$ is chosen as (30), then $u_k$ is bounded; hence $\Phi_k$ is bounded. From (43) we have
$$\lim_{k \to \infty} \sup \frac{e^2_{k+1}}{\left(1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T\right)^2} \le \varepsilon^2. \quad (44)$$
Considering that $\Phi_k$ is bounded, $e^2_{k+1}$ is bounded. From (40), $y_{k+1} = e_{k+1} + r_{k+1}$; $r_k$ is bounded, so $y_{k+1}$ is bounded too.

To sum up, we find that OS-ELM neural networks have the capability of identifying and controlling nonlinear systems. However, as we know, conventional adaptive control systems are usually based on a fixed or slowly adapting model. Such a system cannot react quickly to abrupt parameter changes and will produce large transient errors before convergence. In this case, the MMAC algorithm is presented as a useful tool. We can construct multiple OS-ELM neural networks to cover the uncertainty of the parameters of the plant by initializing each OS-ELM neural network in a different position. Meanwhile, an effective index function can be used to select the optimal identification model and the optimal controller. MMAC based on the OS-ELM algorithm can improve the control performance of the system greatly and avoid the influence of jumping parameters on the plant.
3.4. Multiple Model Adaptive Control. Multiple model adaptive control can be regarded as an extension of conventional indirect adaptive control. The control system contains $M$ identification models, denoted by $I_l$, $l \in \{1, 2, \ldots, M\}$. According to (16) and (30), the multiple model set can be established as follows:
$$I_l:\ \hat{y}^l_{k+1} = \hat{f}_l[\mathbf{x}_k, \mathbf{w}^l_k] + \hat{g}_l[\mathbf{x}_k, \mathbf{v}^l_k]\, u^l_k, \quad (45)$$
where
$$\hat{f}_l[\mathbf{x}_k, \mathbf{w}^l_k] = \sum_{i=1}^{L} w^l_i\, G(\mathbf{a}_i, b_i, \mathbf{x}_k), \qquad \hat{g}_l[\mathbf{x}_k, \mathbf{v}^l_k] = \sum_{i=L+1}^{2L} v^l_i\, G(\mathbf{a}_i, b_i, \mathbf{x}_k). \quad (46)$$
The control $u^l_k$ can be rewritten as
$$u^l_k = \frac{-\hat{f}_l[\mathbf{x}_k, \mathbf{w}^l_k] + r_{k+1}}{\hat{g}_l[\mathbf{x}_k, \mathbf{v}^l_k]}. \quad (47)$$
Define $e^{*l}_{k+1} \triangleq y_{k+1} - \hat{y}^l_{k+1}$. At any instant, one of the models $I_l$ is selected by a switching rule, and the corresponding control input is used to control the plant. Referring to Theorem 1, the index function has the form
$$J^l_k = \sum_{\tau=1}^{k} \sigma_\tau \left[ \frac{e^{*l2}_{\tau+1}}{\left(1 + \sigma_\tau \Phi_\tau P_{\tau-1} \Phi_\tau^T\right)^2} - \varepsilon^2 \right]. \quad (48)$$
At every sample time, the model that corresponds to the minimum $J^l_k$ is chosen; that is, $I_i$ is chosen if
$$J^i_k = \min_l J^l_k, \quad (49)$$
and $u^i_k$ is used as the control input at that instant. The structure of multiple model adaptive control is shown in Figure 2.

Referring to [15], Narendra and Xiang established four kinds of MMAC structure: (1) all models are fixed; (2) all models are adaptive; (3) $(M-1)$ fixed models and one adaptive model; (4) $(M-2)$ fixed models, one free-running adaptive model, and one reinitialized adaptive model. A similar stability result can be obtained here. For the detailed proof of stability of MMAC, the reader can refer to [15–17].
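The switching rule (48)–(49) can be illustrated in isolation. In the sketch below the per-model identification errors are synthetic stand-ins (random sequences with different variances, model 0 being the most accurate) rather than outputs of real OS-ELM models, and the normalization term $(1 + \sigma_\tau \Phi_\tau P_{\tau-1} \Phi_\tau^T)$ is taken as 1; both are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
M, T, EPS = 4, 200, 0.05   # number of models, horizon, dead-zone bound

# Hypothetical per-model identification errors e*^l_k; in the real loop these are
# y_{k+1} - y_hat^l_{k+1} from the M OS-ELM models. Model 0 is the most accurate.
errors = rng.normal(size=(T, M)) * np.array([0.01, 0.2, 0.1, 0.05])

J = np.zeros(M)            # running index J^l_k of (48), one entry per model
chosen = []
for k in range(T):
    e2 = errors[k] ** 2
    sigma = (e2 > EPS ** 2).astype(float)   # dead-zone sigma_tau, per model
    J += sigma * (e2 - EPS ** 2)            # accumulate the summand of (48)
    chosen.append(int(np.argmin(J)))        # switching rule (49)
```

Because the dead zone ignores errors below $\varepsilon$, a model whose errors stay inside the dead zone accumulates nothing and is quickly singled out as the winner of (49).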
4. Simulation Results

4.1. Adaptive Control. In this section we present simulation results of adaptive control of nonlinear discrete-time systems by using OS-ELM neural networks. The nonlinear system considered is
$$y_{k+1} = \frac{y_k (1 - 0.5F) + 0.5F}{1 + y_k^2} - 0.5 (1 + y_k) u_k, \quad (50)$$
where we define $f[y_k] = (y_k(1 - 0.5F) + 0.5F)/(1 + y_k^2)$ and $g[y_k] = -0.5(1 + y_k)$; $F = 0.1$ is a scalar. The control goal is to force the system states to track a reference model trajectory. We select the reference model as
$$r_k = 0.2 \sin\left(\frac{2\pi k}{100}\right). \quad (51)$$
The single-input/single-output nonlinear discrete-time system can be modeled by
$$\hat{y}_{k+1} = \hat{f}[y_k, \mathbf{w}_k] + \hat{g}[y_k, \mathbf{v}_k]\, u_k, \quad (52)$$
where $\hat{f}$ and $\hat{g}$ are OS-ELM neural networks. The neural networks $\hat{f}$ and $\hat{g}$ have the same structure $\aleph_{1,50,1}$ (one input node, 50 hidden nodes, and one output node). Based on the OS-ELM adaptive control algorithm, the parameters of the neural networks, $\mathbf{w}_k$ and $\mathbf{v}_k$, are updated to $\mathbf{w}_{k+1}$ and $\mathbf{v}_{k+1}$ using the OS-ELM algorithm.

For the OS-ELM control algorithm, in Step 1 we choose $N_0 = 300$; the control $u_k$ is selected randomly from $[0, 0.2]$. A small chunk of off-line training data is used to calculate the initial output weights $\mathbf{w}_0$ and $\mathbf{v}_0$. Since the ELM algorithm adjusts only the output weights, it shows fast and accurate identification results.

Following this, in Step 2 both identification and control are implemented. The response of the controlled system with the reference input $r_k = 0.2\sin(2\pi k/100)$ under the OS-ELM algorithm is shown in Figure 3. We can see that the controlled system tracks the reference model rapidly. The control error is shown in Figure 4.

4.2. Choice of the Data Set. In the simulation process we find that OS-ELM is sensitive to the initial training data. The initial training data determine the initial value of the adaptive control directly: when we select different initial training samples, the performance of the control results varies greatly. Figure 5 shows the control results using OS-ELM neural networks with different initial training samples.

In Figure 6 we select 6 different ranges of random input-output pairs. We can find that different ranges of input values and their corresponding output values concentrate in a single area each. According to the output range of the controlled plant, we can choose a specific data area to train the neural network. In Figure 5 the output range of the controlled system lies in $[-0.2, 0.2]$. If we adopt $[0, 0.2]$ random input (red area in Figure 6(b), initializing OS-ELM by using this data set), the control result is shown in Figure 5(b). On the contrary, if random input from another area (green, blue, yellow areas and so on in Figure 6(b)) is used to initialize OS-ELM, the control result is poor (Figure 5(a)).

On one hand, in the actual production process we have a large amount of field data distributed over a large range. On the other hand, we generally know the objective output range of the controlled plant. Then, according to the output range of the controlled plant, we can choose a specific data area to train the neural network. In this way the accuracy of control can be improved and computation time can be saved.
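The data-selection idea above — keep only the field samples whose outputs fall in the plant's target operating range — can be sketched as follows. The wide input range $[-0.2, 2]$ and the sample count are assumptions of this sketch, standing in for the "large number of field data" mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def plant(y, u, F=0.1):
    """Controlled plant (50)."""
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y * y) - 0.5 * (1 + y) * u

# "Field data": drive the plant with inputs spread over a wide range, as in Figure 6
y, records = 0.0, []
for _ in range(2000):
    u = rng.uniform(-0.2, 2.0)
    y_next = plant(y, u)
    records.append((y, u, y_next))
    y = y_next

# Keep only the samples whose outputs land in the target operating range [-0.2, 0.2];
# these are the ones used to initialize OS-ELM for this control task
lo, hi = -0.2, 0.2
train = [(yk, uk, yn) for yk, uk, yn in records if lo <= yn <= hi]
```

The retained subset `train` is what would be fed to Step 1 of the OS-ELM algorithm in place of an arbitrary slice of the field data.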
Figure 2: Structure of multiple models adaptive control (plant, reference model, controllers $C_1, \ldots, C_M$, identification models $I_1, \ldots, I_M$, and the switching logic driven by the identification errors $e^{*l}_k$ and control errors).
Figure 3: Nonlinear systems adaptive control by using OS-ELM.
4.3. Multiple Models Adaptive Control. From the above simulation results we find that OS-ELM neural networks show an excellent capability of identification. But from the OS-ELM adaptive control algorithm we can see that it contains two steps (an initialization phase and an adaptive control phase). The algorithm performs well only for plants with fixed or slowly time-varying parameters. For a system with jumping parameters, once a parameter change happens, the OS-ELM adaptive control algorithm needs to implement Step 1 again, or the control performance will be very poor. To solve this kind of problem, MMAC (multiple models adaptive control) is presented as a very useful tool.
Figure 4: Control error.
Consider the controlled plant as follows:
$$y_{k+1} = \frac{y_k (1 - 0.5F) + 0.5F}{1 + y_k^2} - 0.5 (1 + y_k) u_k, \quad (53)$$
where
$$F = \begin{cases} 0.1, & 1 \le k < 500, \\ 0.15, & 500 \le k < 1000, \\ 0.355, & 1000 \le k \le 1500. \end{cases} \quad (54)$$
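The jumping-parameter schedule (54) is just a piecewise-constant function of the sample index; a direct transcription:

```python
def F(k):
    """Jumping parameter schedule (54) for the plant (53)."""
    if 1 <= k < 500:
        return 0.1
    if 500 <= k < 1000:
        return 0.15
    return 0.355          # 1000 <= k <= 1500
```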
We choose the index function as (48) and construct 4 OS-ELM neural network models. Among them, the weights of model 1 are reinitialized ones; models 2 and 3 are fixed models, formed according to the environments of the changing parameters; model 4 is a free-running adaptive model.
Figure 5: OS-ELM control results with different initial training samples. (a) $(-0.2, 0)$ random input. (b) $(0, 0.2)$ random input.
Figure 6: Data set. (a) Output results with different input data (input ranges $[-0.2, 0]$, $[0, 0.2]$, $[0.2, 0.5]$, $[0.5, 1]$, $[1, 1.5]$, and $[1.5, 2]$). (b) Different ranges of input values and the corresponding output values concentrate in a single area each.
At any sample time, model 1 chooses the optimal model from the model set formed by models 2, 3, and 4. Figure 7 shows the responses of the multiple models and of the single model.

In Figure 7(a) we can see that single model adaptive control shows poor control quality for the system with jumping parameters: once the parameters change abruptly, the neural network model needs to adjust its weights, which are far from the true values. Figure 7(c) shows the resulting bad control input. In MMAC, the reinitialized adaptive model can initialize its weights by choosing the best weights from a set of models based on the past performance of the plant. At every sample time, based on the index function, the best model in the model set is selected, and the parameters of the reinitialized model are updated from the parameters of that model. From Figure 7(b) we can see that MMAC improves the control quality dramatically compared to Figure 7(a). Figure 7(d) shows the control sequence, and Figure 7(e) shows the switching procedure of the MMAC.
Figure 7: Multiple OS-ELM neural network models adaptive control. (a) Single model. (b) Multiple models. (c) Control, single model. (d) Control, multiple model. (e) Switching process.

5. Conclusion

The application of OS-ELM to the identification and control of nonlinear dynamic systems has been studied carefully. The OS-ELM neural network can improve the accuracy of identification and the control quality dramatically. Since the training process of the OS-ELM neural network is sensitive to the initial training data, different scopes of training data can be used to initialize the OS-ELM neural network based on the output range of the system. The topological structure of the OS-ELM neural network can be adjusted dynamically by using a multiple model switching strategy when it is used to control systems with jumping parameters. Simulation results show that the MMAC based on OS-ELM presented in this paper can improve the control performance dramatically.
Appendix

Lemma A.1 (matrix inversion lemma; see [18]). If
$$P_k^{-1} = P_{k-1}^{-1} + \Phi_k^T \Phi_k \sigma_k, \quad (\text{A.1})$$
where the scalar $\sigma_k > 0$, then $P_k$ is related to $P_{k-1}$ via
$$P_k = P_{k-1} - \frac{\sigma_k P_{k-1} \Phi_k^T \Phi_k P_{k-1}}{1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T}. \quad (\text{A.2})$$
Also,
$$P_k \Phi_k^T = \frac{P_{k-1} \Phi_k^T}{1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T}. \quad (\text{A.3})$$
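The lemma is easy to check numerically. The sketch below picks an arbitrary symmetric positive-definite $P_{k-1}$, a row regressor $\Phi_k$, and a scalar $\sigma_k$ (all values are illustrative), and compares the direct inverse of (A.1) against the rank-one update (A.2):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 4, 0.7          # dimension and a positive scalar sigma_k (arbitrary choices)

# a random symmetric positive-definite P_{k-1} and a 1 x n regressor Phi_k
M = rng.normal(size=(n, n))
P_prev = M @ M.T + n * np.eye(n)
Phi = rng.normal(size=(1, n))

# left-hand route: invert (A.1) directly
P_direct = np.linalg.inv(np.linalg.inv(P_prev) + sigma * Phi.T @ Phi)

# right-hand route: rank-one update (A.2), no matrix inversion needed
denom = 1.0 + sigma * float(Phi @ P_prev @ Phi.T)
P_k = P_prev - sigma * (P_prev @ Phi.T @ Phi @ P_prev) / denom
```

The update (A.2) is what lets Step 2(c) of the algorithm propagate $P_k$ with $O(L^2)$ work per sample instead of re-inverting a $2L \times 2L$ matrix.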
Proof of Theorem 1. Define $\beta_k \triangleq 1/(1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T)$; then from (20), (22), and (A.3) we have
$$\tilde{\theta}_{k+1} = \tilde{\theta}_k - \sigma_k \beta_k P_{k-1} \Phi_k^T e^*_{k+1}, \quad (\text{A.4})$$
where $\tilde{\theta}_{k+1} \triangleq \theta^*_0 - \theta_{k+1}$. Then from (A.4) we can obtain
$$\Phi_k \tilde{\theta}_{k+1} + \Delta_f = \beta_k e^*_{k+1}. \quad (\text{A.5})$$
Introduce the Lyapunov function $V_{k+1} = \tilde{\theta}_{k+1}^T P_k^{-1} \tilde{\theta}_{k+1}$. Substituting (A.1) into it, we obtain
$$\begin{aligned} V_{k+1} &= \tilde{\theta}_{k+1}^T P_{k-1}^{-1} \tilde{\theta}_{k+1} + \tilde{\theta}_{k+1}^T \Phi_k^T \Phi_k \sigma_k \tilde{\theta}_{k+1} \\ &= \left[\tilde{\theta}_k - \sigma_k \beta_k P_{k-1} \Phi_k^T e^*_{k+1}\right]^T P_{k-1}^{-1} \left[\tilde{\theta}_k - \sigma_k \beta_k P_{k-1} \Phi_k^T e^*_{k+1}\right] + \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 \\ &= \tilde{\theta}_k^T P_{k-1}^{-1} \tilde{\theta}_k - 2 \sigma_k \Phi_k \tilde{\theta}_k \beta_k e^*_{k+1} + \sigma_k^2 \beta_k^2 e^{*2}_{k+1} \Phi_k P_{k-1} \Phi_k^T + \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2. \end{aligned} \quad (\text{A.6})$$
Substituting (A.5) into the above equation,
$$V_{k+1} = V_k - 2 \sigma_k \Phi_k \tilde{\theta}_k \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right) + \sigma_k^2 \Phi_k P_{k-1} \Phi_k^T \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right)^2 + \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2. \quad (\text{A.7})$$
Referring to [14], we can obtain
$$\begin{aligned} V_{k+1} &= V_k - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 - 2 \sigma_k \Phi_k \tilde{\theta}_{k+1} \Delta_f - \sigma_k^2 \Phi_k P_{k-1} \Phi_k^T \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right)^2 \\ &\le V_k - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 - 2 \sigma_k \Phi_k \tilde{\theta}_{k+1} \Delta_f; \end{aligned} \quad (\text{A.8})$$
that is,
$$\begin{aligned} V_{k+1} &\le V_k - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 - 2 \sigma_k \Phi_k \tilde{\theta}_{k+1} \Delta_f \\ &= V_k + \sigma_k \Delta_f^2 - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right)^2 \\ &= V_k + \sigma_k \Delta_f^2 - \sigma_k \beta_k^2 e^{*2}_{k+1} \\ &= V_k - \sigma_k \left(\beta_k^2 e^{*2}_{k+1} - \Delta_f^2\right). \end{aligned} \quad (\text{A.9})$$
Finally, if we choose $\sigma_k$ as (36), we have
$$V_{k+1} \le V_k - \sigma_k \left(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2\right); \quad (\text{A.10})$$
that is,
$$V_{k+1} - V_k \le -\sigma_k \left(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2\right) \le 0. \quad (\text{A.11})$$
$V_k$ is nonnegative; then
$$\tilde{\theta}_{k+1}^T P_k^{-1} \tilde{\theta}_{k+1} \le \tilde{\theta}_k^T P_{k-1}^{-1} \tilde{\theta}_k \le \tilde{\theta}_1^T P_0^{-1} \tilde{\theta}_1. \quad (\text{A.12})$$
Now from the matrix inversion lemma it follows that
$$\lambda_{\min}\left(P_k^{-1}\right) \ge \lambda_{\min}\left(P_{k-1}^{-1}\right) \ge \lambda_{\min}\left(P_0^{-1}\right); \quad (\text{A.13})$$
this implies
$$\lambda_{\min}\left(P_0^{-1}\right) \left\|\tilde{\theta}_{k+1}\right\|^2 \le \lambda_{\min}\left(P_k^{-1}\right) \left\|\tilde{\theta}_{k+1}\right\|^2 \le \tilde{\theta}_{k+1}^T P_k^{-1} \tilde{\theta}_{k+1} \le \tilde{\theta}_1^T P_0^{-1} \tilde{\theta}_1 \le \lambda_{\max}\left(P_0^{-1}\right) \left\|\tilde{\theta}_1\right\|^2. \quad (\text{A.14})$$
This establishes part (i). Summing both sides of (A.11) from 1 to $N$, with $V_k$ nonnegative, we have
$$V_{N+1} \le V_1 - \sum_{k=1}^{N} \sigma_k \left(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2\right), \quad (\text{A.15})$$
and letting $N \to \infty$ we immediately get
$$\lim_{N \to \infty} \sum_{k=1}^{N} \sigma_k \left[ \frac{e^{*2}_{k+1}}{\left(1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T\right)^2} - \varepsilon^2 \right] < \infty, \quad (\text{A.16})$$
so (ii) holds. Then
$$\lim_{k \to \infty} \sigma_k \left[ \frac{e^{*2}_{k+1}}{\left(1 + \sigma_k \Phi_k P_{k-1} \Phi_k^T\right)^2} - \varepsilon^2 \right] = 0, \quad (\text{A.17})$$
so (a) and (b) hold.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the Fund of National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References
[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270–276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407–413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452–465, 2011.
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985–990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541–552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588–2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche, et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111–124, 2013.
[13] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553–559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669–1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu, et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190–200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
6 Abstract and Applied Analysis
3.4. Multiple Model Adaptive Control. Multiple model adaptive control can be regarded as an extension of conventional indirect adaptive control. The control system contains $M$ identification models, denoted by $I_l$, $l \in \{1, 2, \ldots, M\}$. According to (16) and (30), the multiple model set can be established as follows:
$$I_l:\; y^l_{k+1} = f_l\left[\mathbf{x}_k, \mathbf{w}^l_k\right] + g_l\left[\mathbf{x}_k, \mathbf{v}^l_k\right] u^l_k, \tag{45}$$
where
$$f_l\left[\mathbf{x}_k, \mathbf{w}^l_k\right] = \sum_{i=1}^{L} w^l_i\, G\left(\mathbf{a}_i, b_i, \mathbf{x}_k\right), \qquad g_l\left[\mathbf{x}_k, \mathbf{v}^l_k\right] = \sum_{i=L+1}^{2L} v^l_i\, G\left(\mathbf{a}_i, b_i, \mathbf{x}_k\right). \tag{46}$$
The control $u^l_k$ can be rewritten as
$$u^l_k = \frac{-f_l\left[\mathbf{x}_k, \mathbf{w}^l_k\right] + r_{k+1}}{g_l\left[\mathbf{x}_k, \mathbf{v}^l_k\right]}. \tag{47}$$
Define $e^{*l}_{k+1} \triangleq y_{k+1} - y^l_{k+1}$. At any instant one of the models $I_l$ is selected by a switching rule, and the corresponding control input is used to control the plant. Referring to Theorem 1, the index function has the form
$$J^l_k = \sum_{\tau=1}^{k} \sigma_\tau \left[ \frac{\left(e^{*l}_{\tau+1}\right)^2}{\left(1 + \sigma_\tau \Phi_\tau \mathbf{P}_{\tau-1} \Phi^T_\tau\right)^2} - \epsilon^2 \right]. \tag{48}$$
At every sample time the model that corresponds to the minimum $J^l_k$ is chosen; that is, $I_i$ is chosen if
$$J^i_k = \min_l J^l_k, \tag{49}$$
and $u^i_k$ is used as the control input at that instant. The structure of multiple model adaptive control is shown in Figure 2.
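The switching rule of (48)-(49) combined with the control law (47) can be sketched as follows. This is our own minimal illustration, not the authors' code: the function name and data layout are assumptions, and the dead-zone normalization term of (48) is simplified away in the accumulated index.

```python
# Sketch of the multiple-model switching rule: each model's index J_l
# accumulates a dead-zone squared prediction error (a simplified form of
# (48), without the (1 + sigma*Phi*P*Phi^T)^2 normalization), and the model
# with the smallest index supplies the control u = (-f_l + r_next) / g_l
# from (47). All names here are illustrative assumptions.

def switch_and_control(models, J, y_next, r_next, sigma, eps):
    """models: list of (f_l, g_l, y_pred_l) triples for the current state;
    J: running index per model. Returns (updated J, chosen index, control)."""
    for l, (f_l, g_l, y_pred_l) in enumerate(models):
        e_star = y_next - y_pred_l                  # identification error e*_l
        J[l] += sigma * (e_star ** 2 - eps ** 2)    # simplified index (48)
    best = min(range(len(models)), key=lambda l: J[l])
    f_b, g_b, _ = models[best]
    u = (-f_b + r_next) / g_b                       # control law (47)
    return J, best, u
```

At every step the plant is driven only by the control of the currently best model, while all models keep accumulating their indices.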
Referring to [15], Narendra and Xiang established four kinds of MMAC structure: (1) all models are fixed; (2) all models are adaptive; (3) $(M-1)$ fixed models and one adaptive model; (4) $(M-2)$ fixed models, one free-running adaptive model, and one reinitialized adaptive model. A similar stability result can be obtained here; for a detailed proof of the stability of MMAC, the reader can refer to [15-17].
4. Simulation Results

4.1. Adaptive Control. In this section we present simulation results for adaptive control of nonlinear discrete-time systems using OS-ELM neural networks. The nonlinear system considered is
$$y_{k+1} = \frac{y_k \left(1 - 0.5F\right) + 0.5F}{1 + y_k^2} - 0.5 \left(1 + y_k\right) u_k, \tag{50}$$
where we define $f[y_k] = \left(y_k(1 - 0.5F) + 0.5F\right)/\left(1 + y_k^2\right)$ and $g[y_k] = -0.5(1 + y_k)$; $F = 0.1$ is a scalar. The control goal is to force the system states to track a reference model trajectory. We select the reference model as
$$r_k = 0.2 \sin\left(\frac{2\pi k}{100}\right). \tag{51}$$
The single-input/single-output nonlinear discrete-time system can be modeled by
$$y_{k+1} = f\left[y_k, \mathbf{w}_k\right] + g\left[y_k, \mathbf{v}_k\right] u_k, \tag{52}$$
where $f$ and $g$ are OS-ELM neural networks. The neural networks $f$ and $g$ have the same structure $\aleph^{1,50,1}$. Based on the OS-ELM adaptive control algorithm, the parameters of the neural networks, $\mathbf{w}_k$ and $\mathbf{v}_k$, are updated to $\mathbf{w}_{k+1}$ and $\mathbf{v}_{k+1}$ using the OS-ELM algorithm.

For the OS-ELM control algorithm, in Step 1 we choose $N_0 = 300$, and the control $u_k$ is selected randomly from $[0, 0.2]$. A small chunk of off-line training data is used to calculate the initial output weights $\mathbf{w}_0$ and $\mathbf{v}_0$. Since the ELM algorithm adjusts only the output weights, it shows fast and accurate identification results.

Following this, in Step 2, both identification and control are implemented. The response of the controlled system with the reference input $r_k = 0.2\sin(2\pi k/100)$ under the OS-ELM algorithm is shown in Figure 3. We can see that the controlled system tracks the reference model rapidly. The control error is shown in Figure 4.
4.2. Choice of the Data Set. During the simulations we find that OS-ELM is sensitive to the initial training data, which directly determines the initial values used by the adaptive control. When different initial training samples are selected, the control performance varies greatly. Figure 5 shows the control results of OS-ELM neural networks with different initial training samples.
In Figure 6 we select six different ranges of random input-output pairs. Different ranges of input values, and the corresponding output values, each concentrate in a single area. According to the output range of the controlled plant, we can choose a specific data area to train the neural network. In Figure 5 the output range of the controlled system lies within $[-0.2, 0.2]$. If we adopt random input in $[0, 0.2]$ (the red area in Figure 6(b)) to initialize OS-ELM, the control result is shown in Figure 5(b). On the contrary, if random input from another area (the green, blue, yellow areas, and so on, in Figure 6(b)) is used to initialize OS-ELM, the control result is poor (Figure 5(a)).
On one hand, in an actual production process we have a large amount of field data distributed over a large range; on the other hand, we generally know the objective output range of the controlled plant. According to that output range, we can choose a specific data area to train the neural network. In this way the control accuracy can be improved and computation time can be saved.
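The data-scope division described above amounts to filtering the candidate training pool by the plant's objective output range before initializing OS-ELM. A minimal sketch, assuming the pool is generated by driving plant (50) with random inputs from several ranges (function and variable names are our own):

```python
# Sketch of data-scope division: from a pool of input-output pairs gathered
# under different input ranges, keep only the pairs whose outputs fall in the
# plant's objective output range [-0.2, 0.2] before initializing OS-ELM.
import random

random.seed(1)

def plant(y, u, F=0.1):      # plant (50), used here to generate field data
    return (y * (1 - 0.5 * F) + 0.5 * F) / (1 + y * y) - 0.5 * (1 + y) * u

def collect(u_lo, u_hi, n=300):
    """Drive the plant with random inputs from [u_lo, u_hi]; return (u, y) pairs."""
    y, pairs = 0.0, []
    for _ in range(n):
        u = random.uniform(u_lo, u_hi)
        y = plant(y, u)
        pairs.append((u, y))
    return pairs

# Pool of field data from several input ranges (as in Figure 6).
pool = collect(-0.2, 0.0) + collect(0.0, 0.2) + collect(0.2, 0.5) + collect(0.5, 1.0)
target = (-0.2, 0.2)          # objective output range of the controlled plant
chunk = [(u, y) for (u, y) in pool if target[0] <= y <= target[1]]
```

Only `chunk` is then used as the initialization data $N_0$, which both improves control accuracy and avoids spending training time on irrelevant operating regions.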
Figure 2: Structure of multiple model adaptive control (reference model, controllers $C_1, \ldots, C_M$, identification models $I_1, \ldots, I_M$, switching block, and plant).

Figure 3: Nonlinear system adaptive control by using OS-ELM ($r_k$ and $y_k$ versus $k$).
4.3. Multiple Model Adaptive Control. From the above simulation results we find that OS-ELM neural networks show excellent identification capability. However, the OS-ELM adaptive control algorithm contains two steps (an initialization phase and an adaptive control phase) and identifies well only systems with fixed or slowly time-varying parameters. For a system with jumping parameters, once the parameters change the OS-ELM adaptive control algorithm needs to implement Step 1 again, or the control performance will be very poor. To solve this kind of problem, MMAC (multiple model adaptive control) is presented as a very useful tool.
Figure 4: Control error.
Consider the controlled plant
$$y_{k+1} = \frac{y_k \left(1 - 0.5F\right) + 0.5F}{1 + y_k^2} - 0.5 \left(1 + y_k\right) u_k, \tag{53}$$
where
$$F = \begin{cases} 0.1, & 1 \le k < 500, \\ 0.15, & 500 \le k < 1000, \\ 0.355, & 1000 \le k \le 1500. \end{cases} \tag{54}$$
We choose the index function as (48) and construct four OS-ELM neural network models. Among them, the weights of model 1 are reinitialized; models 2 and 3 are fixed models, formed according to the environments of the changing parameters; model 4 is a free-running adaptive model.
Figure 5: OS-ELM control results with different initial training samples. (a) $(-0.2, 0)$ random input; (b) $(0, 0.2)$ random input.

Figure 6: Data set. (a) Output results with different input data; (b) different ranges of input values and the corresponding output values concentrate in a single area.
At any sample time, model 1 chooses the optimal model from the model set formed by models 2, 3, and 4. Figure 7 shows the responses under multiple models and under a single model.

In Figure 7(a) we can see that single-model adaptive control shows poor control quality for the system with jumping parameters: once the parameters change abruptly, the neural network model needs to adjust its weights, which are far from the true values, and Figure 7(c) shows the resulting bad control input. In MMAC the reinitialized adaptive model can initialize its weights by choosing the best weights from a set of models based on the past performance of the plant. At every sample time, based on the index function, the best model in the model set is selected, and the parameters of the reinitialized model are updated from the parameters of that model. From Figure 7(b) we can see that MMAC improves the control quality dramatically compared to Figure 7(a). Figure 7(d) shows the control sequence, and Figure 7(e) shows the switching procedure of the MMAC.
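The jumping-parameter setting of (53)-(54) and the four-model bank described above can be summarized as a structural skeleton. This is our own illustration of the configuration only, with assumed names; it does not implement the models themselves.

```python
# Skeleton of the MMAC setup for plant (53)-(54): F jumps twice over
# 1 <= k <= 1500, and a bank of four models covers the environments
# (one reinitialized adaptive, two fixed, one free-running adaptive).

def F_of_k(k):
    """Piecewise parameter schedule (54), defined for 1 <= k <= 1500."""
    if 1 <= k < 500:
        return 0.1
    if 500 <= k < 1000:
        return 0.15
    return 0.355                                 # 1000 <= k <= 1500

model_bank = {
    1: {"kind": "reinitialized adaptive"},       # weights reset from current best model
    2: {"kind": "fixed", "F": 0.15},             # fixed model for one environment (assumed)
    3: {"kind": "fixed", "F": 0.355},            # fixed model for another environment (assumed)
    4: {"kind": "free-running adaptive"},        # adapts continuously, never reset
}
```

At each parameter jump, the index (48) quickly favors whichever fixed or adaptive model matches the new environment, and model 1 inherits its weights.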
Figure 7: Multiple OS-ELM neural network models adaptive control. (a) Single model response; (b) multiple models response; (c) control input, single model; (d) control input, multiple models; (e) switching process.

5. Conclusion

The application of OS-ELM to the identification and control of nonlinear dynamic systems has been studied carefully. The OS-ELM neural network can improve the accuracy of identification and the control quality dramatically. Since the training process of the OS-ELM neural network is sensitive to the initial training data, different scopes of training data can be used to initialize the OS-ELM neural network based on the output range of the system. The topological structure of the OS-ELM neural network can be adjusted dynamically by using a multiple model switching strategy when it is used to control systems with jumping parameters. Simulation results show that the MMAC scheme based on OS-ELM presented in this paper can improve the control performance dramatically.
Appendix

Lemma A.1 (matrix inversion lemma; see [18]). If
$$\mathbf{P}^{-1}_k = \mathbf{P}^{-1}_{k-1} + \Phi^T_k \Phi_k \sigma_k, \tag{A.1}$$
where the scalar $\sigma_k > 0$, then $\mathbf{P}_k$ is related to $\mathbf{P}_{k-1}$ via
$$\mathbf{P}_k = \mathbf{P}_{k-1} - \frac{\sigma_k \mathbf{P}_{k-1} \Phi^T_k \Phi_k \mathbf{P}_{k-1}}{1 + \sigma_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k}. \tag{A.2}$$
Also,
$$\mathbf{P}_k \Phi^T_k = \frac{\mathbf{P}_{k-1} \Phi^T_k}{1 + \sigma_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k}. \tag{A.3}$$
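Lemma A.1 is easy to check numerically; the scalar case ($\mathbf{P}$ and $\Phi$ one-dimensional) already exercises both identities (A.2) and (A.3). This is our own sanity check, not part of the paper.

```python
# Numerical check of Lemma A.1 in the scalar case: with P_{k-1} = p,
# Phi_k = phi, sigma_k = s, inverting (A.1) gives
# P_k = 1 / (1/p + s*phi^2), which must agree with the update (A.2)
# and satisfy (A.3).
p, phi, s = 2.0, 0.7, 0.5

p_next_direct = 1.0 / (1.0 / p + s * phi * phi)                        # invert (A.1)
p_next_lemma = p - (s * p * phi * phi * p) / (1.0 + s * phi * p * phi) # (A.2)

lhs_a3 = p_next_lemma * phi                                            # P_k * Phi^T
rhs_a3 = (p * phi) / (1.0 + s * phi * p * phi)                         # (A.3)
```

The point of the lemma is computational: (A.2) updates $\mathbf{P}_k$ directly from $\mathbf{P}_{k-1}$ by a rank-one correction, avoiding any explicit matrix inversion at each step of the recursive algorithm.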
Proof of Theorem 1. Define $\beta_k \triangleq 1/\left(1 + \sigma_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k\right)$; then from (20), (22), and (A.3) we have
$$\tilde{\theta}_{k+1} = \tilde{\theta}_k - \sigma_k \beta_k \mathbf{P}_{k-1} \Phi^T_k e^*_{k+1}, \tag{A.4}$$
where $\tilde{\theta}_{k+1} \triangleq \theta^*_0 - \theta_{k+1}$. Then from (A.4) we can obtain
$$\Phi_k \tilde{\theta}_{k+1} + \Delta_f = \beta_k e^*_{k+1}. \tag{A.5}$$
Introduce the Lyapunov function $V_{k+1} = \tilde{\theta}^T_{k+1} \mathbf{P}^{-1}_k \tilde{\theta}_{k+1}$; substituting (A.1) into it, we obtain
$$\begin{aligned} V_{k+1} &= \tilde{\theta}^T_{k+1} \mathbf{P}^{-1}_{k-1} \tilde{\theta}_{k+1} + \tilde{\theta}^T_{k+1} \Phi^T_k \Phi_k \sigma_k \tilde{\theta}_{k+1} \\ &= \left[\tilde{\theta}_k - \sigma_k \beta_k \mathbf{P}_{k-1} \Phi^T_k e^*_{k+1}\right]^T \mathbf{P}^{-1}_{k-1} \left[\tilde{\theta}_k - \sigma_k \beta_k \mathbf{P}_{k-1} \Phi^T_k e^*_{k+1}\right] + \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 \\ &= \tilde{\theta}^T_k \mathbf{P}^{-1}_{k-1} \tilde{\theta}_k - 2\sigma_k \beta_k \Phi_k \tilde{\theta}_k e^*_{k+1} + \sigma^2_k \beta^2_k e^{*2}_{k+1} \Phi_k \mathbf{P}_{k-1} \Phi^T_k + \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2. \end{aligned} \tag{A.6}$$
Substituting (A.5) into the above equation,
$$V_{k+1} = V_k - 2\sigma_k \Phi_k \tilde{\theta}_k \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right) + \sigma^2_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right)^2 + \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2. \tag{A.7}$$
Referring to [14], we can obtain
$$\begin{aligned} V_{k+1} &= V_k - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 - 2\sigma_k \Phi_k \tilde{\theta}_{k+1} \Delta_f - \sigma^2_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right)^2 \\ &\le V_k - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 - 2\sigma_k \Phi_k \tilde{\theta}_{k+1} \Delta_f; \end{aligned} \tag{A.8}$$
that is,
$$\begin{aligned} V_{k+1} &\le V_k - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1}\right)^2 - 2\sigma_k \Phi_k \tilde{\theta}_{k+1} \Delta_f \\ &= V_k + \sigma_k \Delta^2_f - \sigma_k \left(\Phi_k \tilde{\theta}_{k+1} + \Delta_f\right)^2 \\ &= V_k + \sigma_k \Delta^2_f - \sigma_k \beta^2_k e^{*2}_{k+1} \\ &= V_k - \sigma_k \left(\beta^2_k e^{*2}_{k+1} - \Delta^2_f\right). \end{aligned} \tag{A.9}$$
Finally, if we choose $\sigma_k$ as in (36), we have
$$V_{k+1} \le V_k - \sigma_k \left(\beta^2_k e^{*2}_{k+1} - \epsilon^2\right); \tag{A.10}$$
that is,
$$V_{k+1} - V_k \le -\sigma_k \left(\beta^2_k e^{*2}_{k+1} - \epsilon^2\right) \le 0. \tag{A.11}$$
Since $V_k$ is nonnegative,
$$\tilde{\theta}^T_{k+1} \mathbf{P}^{-1}_k \tilde{\theta}_{k+1} \le \tilde{\theta}^T_k \mathbf{P}^{-1}_{k-1} \tilde{\theta}_k \le \tilde{\theta}^T_1 \mathbf{P}^{-1}_0 \tilde{\theta}_1. \tag{A.12}$$
Now from the matrix inversion lemma it follows that
$$\lambda_{\min}\left(\mathbf{P}^{-1}_k\right) \ge \lambda_{\min}\left(\mathbf{P}^{-1}_{k-1}\right) \ge \lambda_{\min}\left(\mathbf{P}^{-1}_0\right), \tag{A.13}$$
which implies
$$\lambda_{\min}\left(\mathbf{P}^{-1}_0\right) \left\|\tilde{\theta}_{k+1}\right\|^2 \le \lambda_{\min}\left(\mathbf{P}^{-1}_k\right) \left\|\tilde{\theta}_{k+1}\right\|^2 \le \tilde{\theta}^T_{k+1} \mathbf{P}^{-1}_k \tilde{\theta}_{k+1} \le \tilde{\theta}^T_1 \mathbf{P}^{-1}_0 \tilde{\theta}_1 \le \lambda_{\max}\left(\mathbf{P}^{-1}_0\right) \left\|\tilde{\theta}_1\right\|^2. \tag{A.14}$$
This establishes part (i). Summing both sides of (A.11) from $1$ to $N$, with $V_k$ nonnegative, we have
$$V_{N+1} \le V_1 - \sum_{k=1}^{N} \sigma_k \left(\beta^2_k e^{*2}_{k+1} - \epsilon^2\right); \tag{A.15}$$
letting $N \to \infty$, we immediately get
$$\lim_{N \to \infty} \sum_{k=1}^{N} \sigma_k \left[ \frac{e^{*2}_{k+1}}{\left(1 + \sigma_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k\right)^2} - \epsilon^2 \right] < \infty, \tag{A.16}$$
so (ii) holds. Then
$$\lim_{k \to \infty} \sigma_k \left[ \frac{e^{*2}_{k+1}}{\left(1 + \sigma_k \Phi_k \mathbf{P}_{k-1} \Phi^T_k\right)^2} - \epsilon^2 \right] = 0, \tag{A.17}$$
so (a) and (b) hold.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References

[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270-276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407-413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452-465, 2011.
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985-990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411-1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541-552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588-2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche, et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111-124, 2013.
[13] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553-559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669-1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu, et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190-200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.
[9] N-Y Liang G-B Huang P Saratchandran and N Sundarara-jan ldquoA fast and accurate online sequential learning algorithmfor feedforward networksrdquo IEEE Transactions on Neural Net-works vol 17 no 6 pp 1411ndash1423 2006
[10] S Suresh R Venkatesh Babu and H J Kim ldquoNo-referenceimage quality assessment using modified extreme learningmachine classifierrdquo Applied Soft Computing Journal vol 9 no2 pp 541ndash552 2009
[11] A A Mohammed R Minhas Q M Jonathan Wu and M ASid-Ahmed ldquoHuman face recognition based on multidimen-sional PCA and extreme learning machinerdquo Pattern Recogni-tion vol 44 no 10-11 pp 2588ndash2597 2011
[12] F BenotM VanHeeswijk YMiche et al ldquoFeature selection fornonlinear models with extreme learning machinesrdquoNeurocom-puting vol 102 pp 111ndash124 2013
[13] E K P Chong and S H Zak An Introduction to OptimizationWiley New York NY USA 2013
[14] H Jiang ldquoDirectly adaptive fuzzy control of discrete-timechaotic systems by least squares algorithm with dead-zonerdquoNonlinear Dynamics vol 62 no 3 pp 553ndash559 2010
[15] K S Narendra and C Xiang ldquoAdaptive control of discrete-timesystems using multiple modelsrdquo Institute of Electrical and Ele-ctronics Engineers Transactions on Automatic Control vol 45no 9 pp 1669ndash1686 2000
[16] X-L Li D-X Liu J-Y Li and D-W Ding ldquoRobust adaptivecontrol for nonlinear discrete-time systems by using multiplemodelsrdquoMathematical Problems in Engineering vol 2013 Arti-cle ID 679039 10 pages 2013
[17] X L Li C Jia D X Liu et al ldquoNonlinear adaptive control usingmultiple models and dynamic neural networksrdquo Neurocomput-ing vol 136 pp 190ndash200 2014
[18] G C Goodwin and K S Sin Adaptive Filtering Prediction andControl Prentice-Hall 1984
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
8 Abstract and Applied Analysis
[Figure 5: OS-ELM control results with different initial training samples; reference r_k and output y_k versus k (0 to 800). (a) (−0.2, 0) random input. (b) (0, 0.2) random input.]
[Figure 6: Data set. (a) Output results y_k versus k for different input ranges: [−0.2, 0], [0, 0.2], [0.2, 0.5], [0.5, 1], [1, 1.5], and [1.5, 2]. (b) Different ranges of input values u_k and the corresponding output values y_k concentrate in a single area (Data 1–Data 6).]
At each sample time, model 1 (the reinitialized adaptive model) chooses the optimal model from the model set formed by the fixed models 2, 3, and 4. Figure 7 shows the responses of the multiple-model and single-model controllers.

In Figure 7(a) we can see that single-model adaptive control gives poor control quality for the system with jumping parameters: once the parameters change abruptly, the neural network model must readjust weights that are far from the true values. Figure 7(c) shows the correspondingly poor control input. In MMAC, the reinitialized adaptive model can initialize its weights by choosing the best weights from a set of models, based on the past performance of the plant. At every sample time, the best model in the model set is selected according to the index function, and the parameters of the reinitialized model are updated from the parameters of that model. From Figure 7(b) we can see that MMAC improves the control quality dramatically compared with Figure 7(a). Figure 7(d) shows the control sequence, and Figure 7(e) shows the switching procedure of the MMAC controller.
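The switching rule described above can be sketched as follows. This is an illustrative sketch only: the exponentially weighted squared-error index and the interfaces are assumptions for illustration, not the paper's exact index function.

```python
import numpy as np

def switching_index(errors, forgetting=0.9):
    """Hypothetical index: exponentially weighted sum of past squared
    identification errors for one model (most recent error weighted most)."""
    weights = forgetting ** np.arange(len(errors))[::-1]
    return float(np.sum(weights * np.asarray(errors) ** 2))

def select_best_model(model_errors):
    """Pick the model whose index is smallest; the reinitialized adaptive
    model would then copy that model's weights."""
    return int(np.argmin([switching_index(e) for e in model_errors]))

# Recent identification errors of three fixed models; model 1 fits best.
model_errors = [[0.5, 0.4, 0.6], [0.05, 0.02, 0.01], [0.3, 0.2, 0.25]]
best = select_best_model(model_errors)  # -> 1
```

At every sample time the winning model's weights seed the reinitialized adaptive model, which is what lets MMAC recover quickly after a parameter jump.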
[Figure 7: Multiple OS-ELM neural network model adaptive control; r_k and y_k versus k over 1500 steps. (a) Single model. (b) Multiple models. (c) Control input, single model. (d) Control input, multiple models. (e) Switching process.]

5. Conclusion

The application of OS-ELM to the identification and control of nonlinear dynamic systems has been studied in detail. The OS-ELM neural network can improve the accuracy of identification and the control quality dramatically. Since the training process of the OS-ELM neural network is sensitive to the initial training data, a different scope of training data can be used to initialize the OS-ELM neural network based on the output range of the system. The topological structure of the OS-ELM neural network can be adjusted dynamically by using a multiple-model switching strategy when it is used to control systems with jumping parameters. Simulation results show that the MMAC based on OS-ELM presented in this paper can improve the control performance dramatically.
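As a rough sketch of the OS-ELM scheme summarized above (a fixed random hidden layer, an initial batch drawn from the expected output range, then one rank-one recursive least-squares update per new sample), the following is illustrative only; the network size, activation, ridge term, and toy plant are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random hidden layer (frozen after initialization), sigmoid activation.
n_in, n_hidden = 1, 20
W = rng.normal(size=(n_in, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Initialization phase: the initial batch is drawn from the data scope that
# matches the expected plant output range (here a toy plant y = sin(3x)).
X0 = rng.uniform(-0.2, 0.2, size=(50, 1))
y0 = np.sin(3 * X0)
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))  # small ridge term
beta = P @ H0.T @ y0

# Sequential phase: recursive least-squares update for each new sample.
for _ in range(200):
    x = rng.uniform(-0.2, 0.2, size=(1, 1))
    y = np.sin(3 * x)
    h = hidden(x)                                   # 1 x n_hidden
    P = P - (P @ h.T @ h @ P) / (1.0 + float(h @ P @ h.T))
    beta = beta + P @ h.T @ (y - h @ beta)

err = abs(float(hidden(np.array([[0.1]])) @ beta) - np.sin(0.3))
```

Only the output weights `beta` are trained; the hidden layer stays random, which is what makes the per-sample update a cheap linear least-squares step.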
Appendix

Lemma A1 (matrix inversion lemma, see [18]). If
\[
\mathbf{P}_k^{-1} = \mathbf{P}_{k-1}^{-1} + \Phi_k^T \Phi_k \sigma_k, \tag{A1}
\]
where the scalar \(\sigma_k > 0\), then \(\mathbf{P}_k\) is related to \(\mathbf{P}_{k-1}\) via
\[
\mathbf{P}_k = \mathbf{P}_{k-1} - \frac{\sigma_k \mathbf{P}_{k-1}\Phi_k^T \Phi_k \mathbf{P}_{k-1}}{1+\sigma_k \Phi_k \mathbf{P}_{k-1}\Phi_k^T}. \tag{A2}
\]
Also,
\[
\mathbf{P}_k \Phi_k^T = \frac{\mathbf{P}_{k-1}\Phi_k^T}{1+\sigma_k \Phi_k \mathbf{P}_{k-1}\Phi_k^T}. \tag{A3}
\]
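Lemma A1 is a Sherman–Morrison-style rank-one update and can be checked numerically; the dimensions and values below are arbitrary assumptions for the check:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
sigma = 0.7                                   # the scalar sigma_k > 0
A = rng.normal(size=(n, n))
P_prev = A @ A.T + np.eye(n)                  # P_{k-1}: symmetric positive definite
Phi = rng.normal(size=(1, n))                 # row regressor Phi_k

# (A1): P_k^{-1} = P_{k-1}^{-1} + Phi^T Phi sigma, inverted directly.
P_k_direct = np.linalg.inv(np.linalg.inv(P_prev) + sigma * Phi.T @ Phi)

# (A2): the lemma's rank-one update, with no explicit matrix inversion.
denom = 1.0 + sigma * float(Phi @ P_prev @ Phi.T)
P_k = P_prev - sigma * (P_prev @ Phi.T @ Phi @ P_prev) / denom

err_A2 = float(np.max(np.abs(P_k_direct - P_k)))

# (A3): P_k Phi^T equals P_{k-1} Phi^T scaled by 1/denom.
err_A3 = float(np.max(np.abs(P_k @ Phi.T - (P_prev @ Phi.T) / denom)))
```

The practical point of the lemma is that the covariance update in the proof never requires an explicit matrix inversion.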
Proof of Theorem 1. Defining \(\beta_k \triangleq 1/(1+\sigma_k\Phi_k \mathbf{P}_{k-1}\Phi_k^T)\), then from (20), (22), and (A3) we have
\[
\tilde{\theta}_{k+1} = \tilde{\theta}_k - \sigma_k\beta_k \mathbf{P}_{k-1}\Phi_k^T e^{*}_{k+1}, \tag{A4}
\]
where \(\tilde{\theta}_{k+1} \triangleq \theta^{*}_0 - \theta_{k+1}\). Then from (A4) we can obtain
\[
\Phi_k\tilde{\theta}_{k+1} + \Delta_f = \beta_k e^{*}_{k+1}. \tag{A5}
\]
Introducing the Lyapunov function \(V_{k+1} = \tilde{\theta}^T_{k+1}\mathbf{P}^{-1}_k\tilde{\theta}_{k+1}\) and substituting (A1) into it, we obtain
\[
\begin{aligned}
V_{k+1} &= \tilde{\theta}^T_{k+1}\mathbf{P}^{-1}_{k-1}\tilde{\theta}_{k+1} + \tilde{\theta}^T_{k+1}\Phi^T_k\Phi_k\sigma_k\tilde{\theta}_{k+1}\\
&= \bigl[\tilde{\theta}_k - \sigma_k\beta_k \mathbf{P}_{k-1}\Phi^T_k e^{*}_{k+1}\bigr]^T \mathbf{P}^{-1}_{k-1}\bigl[\tilde{\theta}_k - \sigma_k\beta_k \mathbf{P}_{k-1}\Phi^T_k e^{*}_{k+1}\bigr] + \sigma_k(\Phi_k\tilde{\theta}_{k+1})^2\\
&= \tilde{\theta}^T_k \mathbf{P}^{-1}_{k-1}\tilde{\theta}_k - 2\sigma_k\Phi_k\tilde{\theta}_k\beta_k e^{*}_{k+1} + \sigma_k^2\beta_k^2 e^{*2}_{k+1}\Phi_k \mathbf{P}_{k-1}\Phi^T_k + \sigma_k(\Phi_k\tilde{\theta}_{k+1})^2.
\end{aligned} \tag{A6}
\]
Substituting (A5) into the above equation,
\[
V_{k+1} = V_k - 2\sigma_k\Phi_k\tilde{\theta}_k(\Phi_k\tilde{\theta}_{k+1}+\Delta_f) + \sigma_k^2\Phi_k \mathbf{P}_{k-1}\Phi^T_k(\Phi_k\tilde{\theta}_{k+1}+\Delta_f)^2 + \sigma_k(\Phi_k\tilde{\theta}_{k+1})^2. \tag{A7}
\]
Referring to [14], we can obtain
\[
\begin{aligned}
V_{k+1} &= V_k - \sigma_k(\Phi_k\tilde{\theta}_{k+1})^2 - 2\sigma_k\Phi_k\tilde{\theta}_{k+1}\Delta_f - \sigma_k^2\Phi_k \mathbf{P}_{k-1}\Phi^T_k(\Phi_k\tilde{\theta}_{k+1}+\Delta_f)^2\\
&\le V_k - \sigma_k(\Phi_k\tilde{\theta}_{k+1})^2 - 2\sigma_k\Phi_k\tilde{\theta}_{k+1}\Delta_f,
\end{aligned} \tag{A8}
\]
that is,
\[
\begin{aligned}
V_{k+1} &\le V_k - \sigma_k(\Phi_k\tilde{\theta}_{k+1})^2 - 2\sigma_k\Phi_k\tilde{\theta}_{k+1}\Delta_f\\
&= V_k + \sigma_k\Delta_f^2 - \sigma_k(\Phi_k\tilde{\theta}_{k+1}+\Delta_f)^2\\
&= V_k + \sigma_k\Delta_f^2 - \sigma_k\beta_k^2 e^{*2}_{k+1}\\
&= V_k - \sigma_k(\beta_k^2 e^{*2}_{k+1} - \Delta_f^2).
\end{aligned} \tag{A9}
\]
Finally, if we choose \(\sigma_k\) as in (36), we have
\[
V_{k+1} \le V_k - \sigma_k(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2), \tag{A10}
\]
that is,
\[
V_{k+1} - V_k \le -\sigma_k(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2) \le 0. \tag{A11}
\]
Since \(V_k\) is nonnegative, it follows that
\[
\tilde{\theta}^T_{k+1}\mathbf{P}^{-1}_k\tilde{\theta}_{k+1} \le \tilde{\theta}^T_k \mathbf{P}^{-1}_{k-1}\tilde{\theta}_k \le \tilde{\theta}^T_1 \mathbf{P}^{-1}_0\tilde{\theta}_1. \tag{A12}
\]
Now, from the matrix inversion lemma it follows that
\[
\lambda_{\min}(\mathbf{P}^{-1}_k) \ge \lambda_{\min}(\mathbf{P}^{-1}_{k-1}) \ge \lambda_{\min}(\mathbf{P}^{-1}_0); \tag{A13}
\]
this implies
\[
\lambda_{\min}(\mathbf{P}^{-1}_0)\,\|\tilde{\theta}_{k+1}\|^2 \le \lambda_{\min}(\mathbf{P}^{-1}_k)\,\|\tilde{\theta}_{k+1}\|^2 \le \tilde{\theta}^T_{k+1}\mathbf{P}^{-1}_k\tilde{\theta}_{k+1} \le \tilde{\theta}^T_1 \mathbf{P}^{-1}_0\tilde{\theta}_1 \le \lambda_{\max}(\mathbf{P}^{-1}_0)\,\|\tilde{\theta}_1\|^2. \tag{A14}
\]
This establishes part (i).

Summing both sides of (A11) from 1 to \(N\), with \(V_k\) being nonnegative, we have
\[
V_{N+1} \le V_1 - \sum_{k=1}^{N}\sigma_k(\beta_k^2 e^{*2}_{k+1} - \varepsilon^2), \tag{A15}
\]
and we immediately get
\[
\lim_{N\to\infty}\sum_{k=1}^{N}\sigma_k\left[\frac{e^{*2}_{k+1}}{(1+\sigma_k\Phi_k \mathbf{P}_{k-1}\Phi^T_k)^2} - \varepsilon^2\right] < \infty, \tag{A16}
\]
so (ii) holds. Then
\[
\lim_{k\to\infty}\sigma_k\left[\frac{e^{*2}_{k+1}}{(1+\sigma_k\Phi_k \mathbf{P}_{k-1}\Phi^T_k)^2} - \varepsilon^2\right] = 0, \tag{A17}
\]
so (a) and (b) hold.
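The update analyzed in the proof is a least-squares law with a dead zone: when the normalized prediction error is within the uncertainty bound ε, the gain σ_k is switched off, so V_k cannot increase. The sketch below assumes a simplified on/off rule for σ_k (the exact choice in (36) is not reproduced here) and a toy linear-in-parameters plant:

```python
import numpy as np

def dead_zone_ls_step(theta, P, phi, e_star, eps):
    """One dead-zone least-squares step in the spirit of (A4) and Lemma A1.

    phi is the 1 x n regressor, e_star the a priori prediction error.
    Assumed rule: sigma_k = 1 if the normalized error beta_k * |e_star|
    exceeds eps, else 0 (no update inside the dead zone)."""
    beta = 1.0 / (1.0 + float(phi @ P @ phi.T))
    if beta * abs(e_star) <= eps:
        return theta, P                       # inside the dead zone
    theta = theta + beta * (P @ phi.T) * e_star
    P = P - (P @ phi.T @ phi @ P) / (1.0 + float(phi @ P @ phi.T))
    return theta, P

# Toy identification of y = phi @ theta_true from noise-free data.
rng = np.random.default_rng(2)
theta_true = np.array([[0.8], [-0.3]])
theta, P = np.zeros((2, 1)), 10.0 * np.eye(2)
for _ in range(100):
    phi = rng.normal(size=(1, 2))
    e_star = float(phi @ (theta_true - theta))   # a priori error
    theta, P = dead_zone_ls_step(theta, P, phi, e_star, eps=1e-3)

err = float(np.max(np.abs(theta - theta_true)))
```

Freezing the update inside the dead zone is exactly what makes the right-hand side of (A11) nonpositive: the algorithm trades exact convergence for guaranteed boundedness under the modeling error Δ_f.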
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the Fund of the National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References
[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270–276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407–413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452–465, 2011.
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985–990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541–552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588–2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche, et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111–124, 2013.
[13] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553–559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669–1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu, et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190–200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Abstract and Applied Analysis 9
06
04
02
0
minus02
minus040 500 1000 1500
Single model
k
rkyk
(a)
06
04
02
0
minus02
minus040 500 1000 1500
k
Multiple models
rkyk
(b)
1
08
06
04
02
0
minus02
minus04
minus060 500 1000 1500
k
Controlsingle model
(c)
1
08
06
04
02
0
minus02
minus04
minus060 500 1000 1500
k
Controlmultiple model
(d)
6
5
4
3
2
1
0200 400 600 800 1000 1200 1400
k
Switching process
(e)
Figure 7 Multiple OS-ELM neural network models adaptive control
based on OS-ELM which is presented in this paper canimprove the control performance dramatically
Appendix
Lemma A1 (matrix inversion lemma see [18]) If
Pminus1119896
= Pminus1119896minus1
+Φ119879
119896Φ119896120590119896 (A1)
where the scalar 120590119896gt 0 then P
119896is related to P
119896minus1via
P119896= P119896minus1
minus120590119896P119896minus1Φ119879119896Φ119896P119896minus1
1 + 120590119896Φ119896P119896minus1Φ119879119896
(A2)
Also
P119896Φ119879
119896=
P119896minus1Φ119879119896
1 + 120590119896Φ119896P119896minus1Φ119879119896
(A3)
10 Abstract and Applied Analysis
Proof of Theorem 1 Defining 120573119896≜ 1(1+120590
119896Φ119896P119896minus1Φ119879119896) then
from (20) (22) and (A3) we have
120579119896+1
= 120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1 (A4)
where 120579119896+1
≜ 120579lowast
0minus 120579119896+1
Then from (A4) we can obtain
Φ119896120579119896+1
+ Δ119891= 120573119896119890lowast
119896+1 (A5)
Introducing Lyapunov function as119881119896+1
= 120579119879
119896+1Pminus1119896120579119896+1
taking(A1) into it hence we can obtain
119881119896+1
= 120579119879
119896+1Pminus1119896minus1
120579119896+1
+ 120579119879
119896+1Φ119879
119896Φ119896120590119896120579119896+1
= [120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1]119879
Pminus1119896minus1
[120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1]
+ 120590119896(Φ119896120579119896+1
)2
= 120579119879
119896Pminus1119896minus1
120579119896minus 2120590119896Φ119896120579119896120573119896119890lowast
119896+1
+ 1205902
1198961205732
119896119890lowast2
119896+1Φ119896P119896minus1Φ119879
119896+ 120590119896(Φ119896120579119896+1
)2
(A6)
Submitting (A5) in the above equation
119881119896+1
= 119881119896minus 2120590119896Φ119896120579119896(Φ119896120579119896+1
+ Δ119891)
+ 1205902
119896Φ119896P119896minus1Φ119879
119896(Φ119896120579119896+1
+ Δ119891)2
+ 120590119896(Φ119896120579119896+1
)2
(A7)
Referring to [14] we can obtain
119881119896+1
= 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
minus 1205902
119896Φ119896P119896minus1Φ119879
119896(Φ119896120579119896+1
+ Δ119891)2
le 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
(A8)
that is
119881119896+1
le 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
= 119881119896+ 120590119896Δ2
119891minus 120590119896(Φ119896120579119896+1
+ Δ119891)2
= 119881119896+ 120590119896Δ2
119891minus 1205901198961205732
119896119890lowast2
119896+1
= 119881119896minus 120590119896(1205732
119896119890lowast2
119896+1minus Δ2
119891)
(A9)
Finally if we choose 120590119896as (36) we have
119881119896+1
le 119881119896minus 120590119896(1205732
119896119890lowast2
119896+1minus 1205762) (A10)
that is
119881119896+1
minus 119881119896le minus120590119896(1205732
119896119890lowast2
119896+1minus 1205762) le 0 (A11)
119881119896is nonnegative then
120579119879
119896+1Pminus1119896120579119896+1
le 120579119879
119896Pminus1119896minus1
120579119896le 120579119879
1Pminus101205791 (A12)
Now from the matrix inversion lemma it follows that
120582min (Pminus1
119896) ge 120582min (P
minus1
119896minus1) ge 120582min (P
minus1
0) (A13)
this implies
120582min (Pminus1
0)10038171003817100381710038171003817120579119896+1
10038171003817100381710038171003817
2
le 120582min (Pminus1
119896)10038171003817100381710038171003817120579119896+1
10038171003817100381710038171003817
2
le 120579119879
119896+1Pminus1119896120579119896+1
le 120579119879
1Pminus101205791le 120582min (P
minus1
0)100381710038171003817100381710038171205791
10038171003817100381710038171003817
2
(A14)
This establishes part (i)Summing both sides of (A11) from 1 to 119873 with 119881
119896being
nonnegative we have
119881119896+1
le 1198811minus lim119873rarrinfin
119873
sum
119896=1
120590119896(1205732
119896119890lowast2
119896+1minus 1205762) (A15)
we immediately get
lim119873rarrinfin
119873
sum
119896=1
120590119896[
[
119890lowast2
119896+1
(1 + 120590119896Φ119896P119896minus1Φ119879119896)2minus 1205762]
]
lt infin (A16)
(ii) holds Then
lim119896rarrinfin
120590119896[
[
119890lowast2
119896+1
(1 + 120590119896Φ119896P119896minus1Φ119879119896)2minus 1205762]
]
= 0 (A17)
(a) and (b) hold
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work is partially supported by the Fund of NationalNatural Science Foundation of China (Grant no 61104013)Program for New Century Excellent Talents in Universi-ties (NCET-11-0578) the Fundamental Research Funds forthe Central Universities (FRF-TP-12-005B) and SpecializedResearch Fund for the Doctoral Program of Higher Educa-tion (20130006110008)
References
[1] C Yang S S Ge and T H Lee ldquoOutput feedback adaptive con-trol of a class of nonlinear discrete-time systems with unknowncontrol directionsrdquoAutomatica vol 45 no 1 pp 270ndash276 2009
[2] X-S Wang C-Y Su and H Hong ldquoRobust adaptive control ofa class of nonlinear systems with unknown dead-zonerdquo Auto-matica vol 40 no 3 pp 407ndash413 2004
[3] M Chen S S Ge and B Ren ldquoAdaptive tracking control ofuncertain MIMO nonlinear systems with input constraintsrdquoAutomatica vol 47 no 3 pp 452ndash465 2011
Abstract and Applied Analysis 11
[4] K S Narendra andK Parthasarathy ldquoIdentification and controlof dynamical systems using neural networksrdquo IEEE Transac-tions on Neural Networks vol 1 no 1 pp 4ndash27 1990
[5] D E Rumelhart G E Hinton and R J Williams ldquoLearningrepresentations by back-propagating errorsrdquo Nature vol 323no 6088 pp 533ndash536 1986
[6] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine a new learning scheme of feedforward neural net-worksrdquo in Proceedings of the IEEE International Joint Conferenceon Neural Networks pp 985ndash990 July 2004
[7] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006
[8] G-B Huang D HWang and Y Lan ldquoExtreme learningmach-ines a surveyrdquo International Journal of Machine Learning andCybernetics vol 2 no 2 pp 107ndash122 2011
[9] N-Y Liang G-B Huang P Saratchandran and N Sundarara-jan ldquoA fast and accurate online sequential learning algorithmfor feedforward networksrdquo IEEE Transactions on Neural Net-works vol 17 no 6 pp 1411ndash1423 2006
[10] S Suresh R Venkatesh Babu and H J Kim ldquoNo-referenceimage quality assessment using modified extreme learningmachine classifierrdquo Applied Soft Computing Journal vol 9 no2 pp 541ndash552 2009
[11] A A Mohammed R Minhas Q M Jonathan Wu and M ASid-Ahmed ldquoHuman face recognition based on multidimen-sional PCA and extreme learning machinerdquo Pattern Recogni-tion vol 44 no 10-11 pp 2588ndash2597 2011
[12] F BenotM VanHeeswijk YMiche et al ldquoFeature selection fornonlinear models with extreme learning machinesrdquoNeurocom-puting vol 102 pp 111ndash124 2013
[13] E K P Chong and S H Zak An Introduction to OptimizationWiley New York NY USA 2013
[14] H Jiang ldquoDirectly adaptive fuzzy control of discrete-timechaotic systems by least squares algorithm with dead-zonerdquoNonlinear Dynamics vol 62 no 3 pp 553ndash559 2010
[15] K S Narendra and C Xiang ldquoAdaptive control of discrete-timesystems using multiple modelsrdquo Institute of Electrical and Ele-ctronics Engineers Transactions on Automatic Control vol 45no 9 pp 1669ndash1686 2000
[16] X-L Li D-X Liu J-Y Li and D-W Ding ldquoRobust adaptivecontrol for nonlinear discrete-time systems by using multiplemodelsrdquoMathematical Problems in Engineering vol 2013 Arti-cle ID 679039 10 pages 2013
[17] X L Li C Jia D X Liu et al ldquoNonlinear adaptive control usingmultiple models and dynamic neural networksrdquo Neurocomput-ing vol 136 pp 190ndash200 2014
[18] G C Goodwin and K S Sin Adaptive Filtering Prediction andControl Prentice-Hall 1984
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
10 Abstract and Applied Analysis
Proof of Theorem 1 Defining 120573119896≜ 1(1+120590
119896Φ119896P119896minus1Φ119879119896) then
from (20) (22) and (A3) we have
120579119896+1
= 120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1 (A4)
where 120579119896+1
≜ 120579lowast
0minus 120579119896+1
Then from (A4) we can obtain
Φ119896120579119896+1
+ Δ119891= 120573119896119890lowast
119896+1 (A5)
Introducing Lyapunov function as119881119896+1
= 120579119879
119896+1Pminus1119896120579119896+1
taking(A1) into it hence we can obtain
119881119896+1
= 120579119879
119896+1Pminus1119896minus1
120579119896+1
+ 120579119879
119896+1Φ119879
119896Φ119896120590119896120579119896+1
= [120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1]119879
Pminus1119896minus1
[120579119896minus 120590119896120573119896P119896minus1Φ119879
119896119890lowast
119896+1]
+ 120590119896(Φ119896120579119896+1
)2
= 120579119879
119896Pminus1119896minus1
120579119896minus 2120590119896Φ119896120579119896120573119896119890lowast
119896+1
+ 1205902
1198961205732
119896119890lowast2
119896+1Φ119896P119896minus1Φ119879
119896+ 120590119896(Φ119896120579119896+1
)2
(A6)
Submitting (A5) in the above equation
119881119896+1
= 119881119896minus 2120590119896Φ119896120579119896(Φ119896120579119896+1
+ Δ119891)
+ 1205902
119896Φ119896P119896minus1Φ119879
119896(Φ119896120579119896+1
+ Δ119891)2
+ 120590119896(Φ119896120579119896+1
)2
(A7)
Referring to [14] we can obtain
119881119896+1
= 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
minus 1205902
119896Φ119896P119896minus1Φ119879
119896(Φ119896120579119896+1
+ Δ119891)2
le 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
(A8)
that is
119881119896+1
le 119881119896minus 120590119896(Φ119896120579119896+1
)2
minus 2120590119896Φ119896120579119896+1
Δ119891
= 119881119896+ 120590119896Δ2
119891minus 120590119896(Φ119896120579119896+1
+ Δ119891)2
= 119881119896+ 120590119896Δ2
119891minus 1205901198961205732
119896119890lowast2
119896+1
= 119881119896minus 120590119896(1205732
119896119890lowast2
119896+1minus Δ2
119891)
(A9)
Finally if we choose 120590119896as (36) we have
119881119896+1
le 119881119896minus 120590119896(1205732
119896119890lowast2
119896+1minus 1205762) (A10)
that is
119881119896+1
minus 119881119896le minus120590119896(1205732
119896119890lowast2
119896+1minus 1205762) le 0 (A11)
Since $V_k$ is nonnegative and nonincreasing, it follows that
\[
\tilde{\theta}_{k+1}^{T}P_k^{-1}\tilde{\theta}_{k+1} \le \tilde{\theta}_k^{T}P_{k-1}^{-1}\tilde{\theta}_k \le \tilde{\theta}_1^{T}P_0^{-1}\tilde{\theta}_1. \qquad (A12)
\]
Now, from the matrix inversion lemma it follows that
\[
\lambda_{\min}\bigl(P_k^{-1}\bigr) \ge \lambda_{\min}\bigl(P_{k-1}^{-1}\bigr) \ge \lambda_{\min}\bigl(P_0^{-1}\bigr); \qquad (A13)
\]
this implies
\[
\lambda_{\min}\bigl(P_0^{-1}\bigr)\bigl\|\tilde{\theta}_{k+1}\bigr\|^{2}
\le \lambda_{\min}\bigl(P_k^{-1}\bigr)\bigl\|\tilde{\theta}_{k+1}\bigr\|^{2}
\le \tilde{\theta}_{k+1}^{T}P_k^{-1}\tilde{\theta}_{k+1}
\le \tilde{\theta}_{1}^{T}P_0^{-1}\tilde{\theta}_{1}
\le \lambda_{\max}\bigl(P_0^{-1}\bigr)\bigl\|\tilde{\theta}_{1}\bigr\|^{2}. \qquad (A14)
\]
This establishes part (i). Summing both sides of (A11) from 1 to $N$, with $V_k$ nonnegative, we have
\[
V_{N+1} \le V_1 - \sum_{k=1}^{N}\sigma_k\bigl(\beta_k^{2}\,e^{\ast 2}_{k+1} - \epsilon^{2}\bigr), \qquad (A15)
\]
from which we immediately get
\[
\lim_{N\to\infty}\sum_{k=1}^{N}\sigma_k\Biggl[\frac{e^{\ast 2}_{k+1}}{\bigl(1 + \sigma_k\Phi_k P_{k-1}\Phi_k^{T}\bigr)^{2}} - \epsilon^{2}\Biggr] < \infty, \qquad (A16)
\]
so (ii) holds. Then
\[
\lim_{k\to\infty}\sigma_k\Biggl[\frac{e^{\ast 2}_{k+1}}{\bigl(1 + \sigma_k\Phi_k P_{k-1}\Phi_k^{T}\bigr)^{2}} - \epsilon^{2}\Biggr] = 0, \qquad (A17)
\]
and (a) and (b) hold.
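The analysis above can be exercised numerically. The sketch below is an illustration under assumptions not taken from the paper: a linear-in-parameters plant $y_{k+1} = \Phi_k\theta^{\ast} + \Delta f_k$ with $|\Delta f_k| \le \bar{\Delta}$, a binary dead-zone gain $\sigma_k \in \{0,1\}$ standing in for the choice (36), and hypothetical numeric values for $\theta^{\ast}$, $P_0$, and the bounds. It checks the step-wise decrease (A11), the eigenvalue monotonicity (A13), and the boundedness implied by (A14):

```python
import numpy as np

# Illustrative dead-zone least-squares run (assumed plant and constants, not the paper's).
rng = np.random.default_rng(0)
theta_true = np.array([0.8, -0.4, 0.5])   # hypothetical unknown parameters theta*
n = theta_true.size
delta_bar = 0.05                          # disturbance bound
eps = delta_bar                           # dead-zone threshold, eps >= delta_bar

theta = np.zeros(n)                       # initial estimate, so theta_err_1 = -theta*
P = np.eye(n)                             # P_0 > 0
v0 = theta_true @ theta_true              # V_1 = theta_err_1^T P_0^{-1} theta_err_1
lam_prev = np.linalg.eigvalsh(np.linalg.inv(P))[0]
triggered = 0

for k in range(300):
    Phi = rng.uniform(-1.0, 1.0, n)            # regressor Phi_k
    df = delta_bar * rng.uniform(-1.0, 1.0)    # bounded disturbance Delta_f_k
    y = Phi @ theta_true + df
    e = y - Phi @ theta                        # a priori error e*_{k+1}
    g = Phi @ P @ Phi                          # Phi_k P_{k-1} Phi_k^T
    sigma = 1.0 if abs(e) / (1.0 + g) > eps else 0.0   # dead zone (stand-in for (36))
    triggered += int(sigma)
    beta = 1.0 / (1.0 + sigma * g)

    err = theta - theta_true
    v = err @ np.linalg.solve(P, err)          # V_k = err^T P_{k-1}^{-1} err
    theta = theta + sigma * beta * (P @ Phi) * e         # parameter update
    P = P - sigma * beta * np.outer(P @ Phi, Phi @ P)    # P_k via matrix inversion lemma
    err = theta - theta_true
    v_next = err @ np.linalg.solve(P, err)     # V_{k+1}

    # (A11): V_{k+1} - V_k <= -sigma_k (beta_k^2 e*^2 - eps^2) <= 0
    assert v_next - v <= -sigma * ((beta * e) ** 2 - eps**2) + 1e-9
    assert v_next <= v + 1e-9
    # (A13): lambda_min(P_k^{-1}) is nondecreasing
    lam = np.linalg.eigvalsh(np.linalg.inv(P))[0]
    assert lam >= lam_prev - 1e-9
    lam_prev = lam

# (A14) with P_0 = I: the parameter error never exceeds its initial norm
assert np.linalg.norm(theta - theta_true) <= np.linalg.norm(theta_true) + 1e-9
```

Under these assumptions the dead zone switches off once $\beta_k|e^{\ast}_{k+1}|$ falls below $\epsilon$, after which $\hat{\theta}_k$, $P_k$, and $V_k$ freeze; this switching-off is the mechanism behind (A15)–(A17).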
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (Grant no. 61104013), the Program for New Century Excellent Talents in Universities (NCET-11-0578), the Fundamental Research Funds for the Central Universities (FRF-TP-12-005B), and the Specialized Research Fund for the Doctoral Program of Higher Education (20130006110008).
References
[1] C. Yang, S. S. Ge, and T. H. Lee, "Output feedback adaptive control of a class of nonlinear discrete-time systems with unknown control directions," Automatica, vol. 45, no. 1, pp. 270–276, 2009.
[2] X.-S. Wang, C.-Y. Su, and H. Hong, "Robust adaptive control of a class of nonlinear systems with unknown dead-zone," Automatica, vol. 40, no. 3, pp. 407–413, 2004.
[3] M. Chen, S. S. Ge, and B. Ren, "Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints," Automatica, vol. 47, no. 3, pp. 452–465, 2011.
Abstract and Applied Analysis 11
[4] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[5] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[6] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 985–990, July 2004.
[7] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[8] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[9] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[10] S. Suresh, R. Venkatesh Babu, and H. J. Kim, "No-reference image quality assessment using modified extreme learning machine classifier," Applied Soft Computing Journal, vol. 9, no. 2, pp. 541–552, 2009.
[11] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, no. 10-11, pp. 2588–2597, 2011.
[12] F. Benoît, M. van Heeswijk, Y. Miche, et al., "Feature selection for nonlinear models with extreme learning machines," Neurocomputing, vol. 102, pp. 111–124, 2013.
[13] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, Wiley, New York, NY, USA, 2013.
[14] H. Jiang, "Directly adaptive fuzzy control of discrete-time chaotic systems by least squares algorithm with dead-zone," Nonlinear Dynamics, vol. 62, no. 3, pp. 553–559, 2010.
[15] K. S. Narendra and C. Xiang, "Adaptive control of discrete-time systems using multiple models," IEEE Transactions on Automatic Control, vol. 45, no. 9, pp. 1669–1686, 2000.
[16] X.-L. Li, D.-X. Liu, J.-Y. Li, and D.-W. Ding, "Robust adaptive control for nonlinear discrete-time systems by using multiple models," Mathematical Problems in Engineering, vol. 2013, Article ID 679039, 10 pages, 2013.
[17] X. L. Li, C. Jia, D. X. Liu, et al., "Nonlinear adaptive control using multiple models and dynamic neural networks," Neurocomputing, vol. 136, pp. 190–200, 2014.
[18] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction and Control, Prentice-Hall, 1984.