
NEURAL NETWORK CONTROLLER FOR

CONTINUOUS STIRRED TANK REACTOR

Mrs. S.SHARANYA, SUPRIYO DEY, ROHIT DASGUPTA, ANUBHAB BISWAS

Department of Electronics and Instrumentation

SRM Institute of Science and Technology, Chennai, India.

Email id: [email protected]

Abstract - The aim of this project is to design and train a neural network to control the operational parameters of an exothermic Continuous Stirred Tank Reactor (CSTR), one of the most common reactors in the chemical industry. The paper is a comprehensive study of this approach to controlling CSTR operation. We compare the system response under Model Predictive Control (MPC) and conventional PID control through the developed input-output relationships, note the drawbacks of these schemes, and then train a multi-layer feed-forward neural network with the back-propagation algorithm, which reduces the operational error through weight adjustments. The mathematical model is developed from the mass-balance and energy-balance equations of the reactor and cast into state-space form; the hardware design for the project is also included. The mathematical model and the simulation are implemented in MATLAB.

Keywords - Neural network, CSTR, MPC, PID, Back-propagation algorithm, Mathematical model, MATLAB.

I. INTRODUCTION

The CSTR is a complex, nonlinear system and one of the most common reactors in a chemical plant. Artificial Neural Networks (ANNs) are used here to model the CSTR, incorporating its nonlinear characteristics. Industrial reactors are usually controlled with linear PID configurations, and the tuning of the controller parameters is based on linearization of the reactor model in a small neighborhood around the stationary operating point. If the process is subjected to larger disturbances, or operates at conditions of higher state sensitivity, the state trajectory can deviate considerably from that neighborhood and the controller performance deteriorates.

This paper presents the modeling and control of a CSTR using neural networks. A multi-layer feed-forward neural network trained with the back-propagation algorithm is used.

CSTR operation is nominally at steady state, but any change in conditions or a temporary shutdown leads to transients; even the feed rate input is transient by nature. The use of neural networks in chemical engineering offers a potentially effective means of handling three difficult problems: complexity, nonlinearity and uncertainty.

The three steps involved in ANN model development are:
- Generation of input-output data
- Network architecture selection
- Model validation

II. MATHEMATICAL MODELLING OF CSTR

The following assumptions are made to obtain the simplified modelling equations of an ideal CSTR:

A. Perfect mixing in the reactor and jacket.
B. Constant volume reactor and jacket.


C. Liquid density and heat capacity are constant.
D. Simple exothermic, first-order reaction.
E. Reactor is perfectly insulated.
F. No energy balance is considered for the jacket.

The mathematical model for this process is formulated by carrying out mass and energy balances and introducing the appropriate constitutive equations.

Component mass balance equation:
V*dCa/dt = q*(Caf - Ca) - V*k0*e^(-Ea/(R*T))*Ca

Energy balance equation:
ρ*Cp*V*dT/dt = q*ρ*Cp*(Tf - T) + (-ΔH)*V*k0*e^(-Ea/(R*T))*Ca + UA*(Tc - T)

Parameters:

Tc = Temperature of the cooling jacket (K)
q = Volumetric flow rate (m^3/s)
V = Volume of the CSTR (m^3)
ρ = Density of the A-B mixture (kg/m^3)
Cp = Heat capacity of the A-B mixture (J/kg.K)
ΔH = Heat of reaction for A → B (J/mol)
k0 = Pre-exponential factor (1/s)
UA = Overall heat transfer coefficient times heat transfer area (W/K)
R = Universal gas constant (J/mol.K)
Caf = Feed concentration (mol/m^3)
Tf = Feed temperature (K)
Ea = Activation energy (J/mol)
Ca = Concentration of A in the CSTR (mol/m^3)
T = Temperature in the CSTR (K)

Function variables: Ca, T
Fixed values: V, ρ, Cp, ΔH, k0, UA
Manipulated variables: Tc, q, Caf, Tf
State and controlled variables: Ca and T, respectively
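For reference, a minimal MATLAB sketch of these balance equations is given below. The parameter values are illustrative (typical textbook CSTR data), not the values used in this paper's simulation files, and consistent time units are assumed throughout.

    % cstr_ode.m - nonlinear CSTR model built from the mass and energy balances above.
    % Parameter values are illustrative assumptions, not the paper's data.
    function dxdt = cstr_ode(t, x, Tc)
        Ca = x(1);                % concentration of A in the reactor (mol/m^3)
        T  = x(2);                % reactor temperature (K)
        q   = 100;   V  = 100;    % volumetric flow rate and reactor volume
        Caf = 1;     Tf = 350;    % feed concentration and feed temperature
        rho = 1000;  Cp = 0.239;  % density and heat capacity of the mixture
        dH  = -5e4;  k0 = 7.2e10; EaR = 8750;  UA = 5e4;   % EaR = Ea/R
        r = k0 * exp(-EaR / T) * Ca;                       % reaction rate
        dCa = (q / V) * (Caf - Ca) - r;                    % mass balance (divided by V)
        dT  = (q / V) * (Tf - T) + (-dH / (rho * Cp)) * r ...
              + UA / (V * rho * Cp) * (Tc - T);            % energy balance (divided by rho*Cp*V)
        dxdt = [dCa; dT];
    end

    % Example: open-loop response to a constant jacket temperature of 300 K.
    % [t, x] = ode45(@(t, x) cstr_ode(t, x, 300), [0 10], [0.5; 350]);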

Linearization: The nonlinear equations are linearized and cast into state-variable form as

dx/dt = A*x + B*u;   y = C*x

where A and B are the Jacobian matrices evaluated at the nominal values of the state and input variables, x, u and y are deviation variables, and C is the output matrix.
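One way to obtain A and B from the nonlinear model above is by numerical (central-difference) Jacobians around a nominal operating point; the operating point and step size below are assumptions for illustration, and ss() requires the Control System Toolbox.

    % Numerical linearization of the CSTR model around a nominal point.
    x0 = [0.5; 350];  u0 = 300;            % nominal state [Ca; T] and jacket temperature (assumed)
    f  = @(x, u) cstr_ode(0, x, u);        % time-invariant right-hand side
    n  = numel(x0);  h = 1e-6;
    A  = zeros(n, n);  B = zeros(n, 1);
    for i = 1:n
        dx = zeros(n, 1);  dx(i) = h;
        A(:, i) = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * h);   % dF/dx
    end
    B(:, 1) = (f(x0, u0 + h) - f(x0, u0 - h)) / (2 * h);          % dF/du
    C = [0 1];                              % measured output: reactor temperature T
    sys = ss(A, B, C, 0);                   % linear state-space model in deviation variables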

FIG. SIMULINK MODEL

III. FEED FORWARD NEURAL NETWORK

FIG. STRUCTURE OF NEURAL NETWORK

A collection of neurons connected together in a network can be represented by a directed graph: nodes represent the neurons, and arrows represent the links between them. Each node has a number, and a link connecting two nodes is labelled by a pair of numbers (e.g. (1,4) connects nodes 1 and 4). Networks without cycles (feedback loops) are called feed-forward networks (or perceptrons).

Input and output nodes: The input nodes of the network (nodes 1, 2 and 3) are associated with the input variables (x1,...,xm). They do not compute anything; they simply pass the values on to the processing nodes. The output nodes (4 and 5) are associated with the output variables (y1,...,yn).


Hidden nodes and layers: A neural network may have hidden nodes; these are not connected directly to the environment ('hidden' inside the network). The nodes may be organised in layers: input (nodes 1, 2 and 3), hidden (4 and 5) and output (6 and 7) layers. Neural networks can have several hidden layers; a minimal sketch of such a layered network follows below.
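As an illustration only (assuming the MATLAB Neural Network Toolbox is available; the layer size and data are placeholders, not the paper's configuration):

    % Two-layer feed-forward network: one hidden layer of 5 neurons plus an output layer.
    x = rand(3, 100);                 % 3 input variables, 100 sample patterns (dummy data)
    t = rand(2, 100);                 % 2 output variables (dummy targets)
    net = feedforwardnet(5);          % hidden layer with 5 nodes
    net = train(net, x, t);           % supervised training
    y = net(x);                       % network outputs for the input patterns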

Training algorithms: The process of finding a set of weights such that, for a given input, the network produces the desired output is called training. Algorithms for training neural networks can be supervised (i.e. with a 'teacher') or unsupervised (self-organising). Supervised algorithms use a training set: a set of pairs (x, y) of inputs with their corresponding desired outputs.

An outline of a supervised learning algorithm (a code sketch follows this outline):
1. Initially, set all the weights wij to some random values.
2. Repeat:
   (a) Feed the network an input x from one of the examples in the training set.
   (b) Compute the network's output f(x).
   (c) Change the weights wij of the nodes.
3. Until the error c(y, f(x)) is small.
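A minimal sketch of that loop for a single linear node trained with the delta (Widrow-Hoff) rule; the training set, learning rate and stopping tolerance are assumptions for illustration.

    % Delta-rule training of one linear node: output = w' * x.
    X = [0 0 1 1; 0 1 0 1];  d = [0 1 1 2];      % dummy training set: learn y = x1 + x2
    w = 0.1 * randn(size(X, 1), 1);              % step 1: random initial weights
    eta = 0.05;                                  % learning rate (assumed)
    for epoch = 1:1000                           % step 2: repeat over the training set
        for n = 1:size(X, 2)
            x = X(:, n);
            e = d(n) - w' * x;                   % error between desired and actual output
            w = w + eta * e * x;                 % weight update (delta rule)
        end
        if mean((d - w' * X).^2) < 1e-6          % step 3: stop when the error is small
            break;
        end
    end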

IV. APPLICATIONS

1. Pattern Classification

The set of all input values is called the input pattern, and the set of output values the output pattern: x = (x1,...,xm) → y = (y1,...,yn). A neural network 'learns' the relation between different input and output patterns; thus, a neural network performs pattern classification or pattern recognition (i.e. it classifies inputs into output categories).

2. Time Series Analysis
The aim of the analysis is to learn to predict future values of a series from its past values x(t1), x(t2),..., x(tm). A neural network can be used to analyse a time series.
Input: the m past values x(t1), x(t2),..., x(tm) are taken as m input variables.
Output: the n future values y(tm+1), y(tm+2),..., y(tm+n) are taken as n output variables.
The goal is to find a model of the form

(y(tm+1),..., y(tm+n)) ≈ f(x(t1), x(t2),..., x(tm))

By training a neural network with m inputs and n outputs on the time-series data, such a model can be created.
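A small sketch of how such input/output windows could be built from a series s; the series, window lengths m and n are illustrative assumptions.

    % Build training data for time-series prediction: m past values -> n future values.
    s = sin(0.1 * (1:200));                  % dummy series
    m = 5;  n = 1;                           % window sizes (assumed)
    P = length(s) - m - n + 1;               % number of training patterns
    X = zeros(m, P);  Y = zeros(n, P);
    for k = 1:P
        X(:, k) = s(k : k + m - 1)';         % m past values as inputs
        Y(:, k) = s(k + m : k + m + n - 1)'; % n future values as targets
    end
    % net = feedforwardnet(10);  net = train(net, X, Y);   % then train as before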

In this project, the first application (pattern classification) is the one used.

V. TRAINING OF MULTI-LAYERED NETWORK

We need to select a network structure (number of hidden layers, hidden nodes and connectivity), select transfer functions that are differentiable, and define a (differentiable) error function. We then search for the weights that minimize the error function, using gradient descent or another optimization method.

FIG. OVERFITTING WITH NEURAL NETWORKS

If the number of hidden units (and weights) is large, it is easy to memorize the training set (or parts of it) and fail to generalize. Typically, the optimal number of hidden units is much smaller than the number of input units; each hidden layer then maps to a space of smaller dimension.

The weights that minimize the error function may create complicated decision surfaces. We therefore stop the minimization early, using a validation data set; this gives a preference to smooth and simple surfaces (a sketch of this early-stopping setup follows).
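A hedged sketch of early stopping with the MATLAB Neural Network Toolbox; the data, data-division ratios and validation-failure limit are assumptions, not the paper's settings.

    % Early stopping via a validation set: training halts when the validation
    % error stops improving for max_fail consecutive epochs.
    X = rand(3, 100);  Y = rand(1, 100);   % dummy training data
    net = feedforwardnet(5);
    net.divideParam.trainRatio = 0.70;     % 70% of samples for training
    net.divideParam.valRatio   = 0.15;     % 15% for validation (early stopping)
    net.divideParam.testRatio  = 0.15;     % 15% held out for testing
    net.trainParam.max_fail    = 6;        % stop after 6 consecutive validation failures
    [net, tr] = train(net, X, Y);          % tr records training/validation/test errors
    % plotperform(tr)                      % typical training curve (cf. the figure below)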

FIG. TYPICAL TRAINING CURVE


VI. BACK PROPAGATION ALGORITHM

The back-propagation training algorithm is an extension of the Widrow-Hoff algorithm. It uses a gradient-descent technique to minimize a cost function equal to the mean squared difference between the desired and the actual network outputs. The network first uses the input vector to produce its own output vector (the actual network output) and then compares this actual output with the desired output, or target vector. If there is no difference, no training takes place; otherwise the weights of the network are changed to reduce the difference between the actual and desired outputs.

Cost Function of the Back-Propagation Network

The cost function that the back-propagation network tries to minimize is the squared difference between the actual and desired output values, summed over the output units and over all pairs of input and output vectors.

Let E(n) = (1/2) ∑ (dj(n) - oj(n))², j = 1 to M, be a measure of the error on input/output pattern n, where dj(n) is the desired output for the jth component of the output vector for input pattern n, M is the number of output units, and oj(n) is the jth element of the actual output vector produced by the presentation of input pattern n.

Let shj(n) = ∑ whji(n) xi(n), i = 1 to N, be the weighted-sum input to unit j in the hidden layer produced by the presentation of input pattern n, where whji is the weight connecting input unit i and hidden unit j, and N is the number of input units.

Similarly, let soj(n) = ∑ woji(n) ohi(n), i = 1 to H, be the weighted-sum input to unit j in the output layer produced by the presentation of input pattern n, where H is the number of hidden units, ohi(n) is the output of hidden unit i produced by the presentation of input pattern n, and woji is the weight connecting hidden unit i and output unit j.

The outputs of the hidden units and output units are, respectively,

ohj(n) = f(shj(n)),   oj(n) = f(soj(n)),

where f is a differentiable, non-decreasing, nonlinear transfer function.

Let E = ∑ E(n), n = 1 to P, be the overall measure of error, where P is the total number of training samples. E is called the cost function of the back-propagation network. The back-propagation algorithm is the technique that finds the weights which minimize the cost function E.
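To make the weight-update step concrete, here is a hedged plain-MATLAB sketch of back-propagation for a single-hidden-layer network with sigmoid units, following the notation above. The data, layer sizes, learning rate and epoch count are illustrative assumptions, not the paper's own code or configuration.

    % Single-hidden-layer back-propagation: wh are the hidden weights (whji),
    % wo the output weights (woji), f the logistic sigmoid transfer function.
    X = rand(3, 50);  D = rand(2, 50);        % dummy patterns: N=3 inputs, M=2 outputs, P=50
    N = size(X, 1);  H = 4;  M = size(D, 1);  % H hidden units (assumed)
    wh = 0.1 * randn(H, N);                   % weights input i -> hidden j
    wo = 0.1 * randn(M, H);                   % weights hidden i -> output j
    f  = @(s) 1 ./ (1 + exp(-s));             % sigmoid transfer function
    eta = 0.5;                                % learning rate (assumed)
    for epoch = 1:2000
        E = 0;
        for n = 1:size(X, 2)
            x  = X(:, n);  d = D(:, n);
            sh = wh * x;   oh = f(sh);        % hidden layer: shj(n), ohj(n)
            so = wo * oh;  o  = f(so);        % output layer: soj(n), oj(n)
            e  = d - o;
            E  = E + 0.5 * sum(e .^ 2);       % E(n) accumulated into the cost E
            do_ = e .* o .* (1 - o);          % output deltas (sigmoid derivative)
            dh  = (wo' * do_) .* oh .* (1 - oh);  % hidden deltas, back-propagated
            wo  = wo + eta * do_ * oh';       % gradient-descent weight updates
            wh  = wh + eta * dh * x';
        end
    end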

VII. SIMULATION RESULTS

FIG. TRAINING WINDOW OF ANN

FIG. MLBPN TRAINING RESPONSE (obtained output y_out = 0.4238)

FIG. PATTERNS OBTAINED FROM MLBPN

Training time = 0.4789; end of back-propagation of errors, Total_error = 0.1557.


FIG. FOPT MODEL FIT

FIG. COMPARISON OF PID, MPC AND NNPC RESPONSES

FIG. VARIATION OF MANIPULATED VARIABLE

TABLE I. PERFORMANCE INDICES AND TIME SPECIFICATIONS OF REGULATORY RESPONSE FOR PID, MPC AND NNPC

FIG. STEP RESPONSE OF THE SYSTEM


VIII. CONTROLLER METHODS

A. Proportional-Integral-Derivative (PID) Controller

PID controllers are widely used in process industries because of their effectiveness and simplicity. A PID controller is a feedback controller whose output, the control variable, is based on the error between the set point and the measured process variable. The error signal e(t) is used to generate the proportional, integral and derivative actions. A mathematical description of the PID controller is

u(t) = Kp * ( e(t) + (1/Ti) ∫ e(t) dt + Td * de(t)/dt )

where Kp is the proportional gain, Ti the integral time constant and Td the derivative time constant.

FIG. PID BLOCK DIAGRAM
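A minimal sketch of a discrete-time implementation of this law, closing the loop around a simple first-order plant; the gains, sampling time and plant are illustrative assumptions, not the paper's tuning.

    % Discrete PID controlling a first-order plant (illustrative, not the CSTR tuning).
    Kp = 2.0;  Ti = 5.0;  Td = 0.5;  Ts = 0.1;     % gains and sampling time (assumed)
    a = exp(-Ts / 2);  b = 1 - a;                  % plant: y(k+1) = a*y(k) + b*u(k)
    y = 0;  e_int = 0;  e_prev = 0;  sp = 1;       % start at rest, unit set point
    Y = zeros(1, 200);
    for k = 1:200
        e = sp - y;
        e_int = e_int + e * Ts;                    % integral action
        e_der = (e - e_prev) / Ts;                 % derivative action
        u = Kp * (e + e_int / Ti + Td * e_der);    % PID law as above
        y = a * y + b * u;                         % plant response to the control move
        e_prev = e;  Y(k) = y;
    end
    % plot(Ts * (1:200), Y)                        % closed-loop step response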

B. Model Predictive Controller (MPC)

MPC is an optimal control strategy based on numerical optimization. Future control inputs and future plant responses are predicted using a system model and optimized at regular intervals with respect to a performance index, subject to constraints on the system inputs and states. MPC consists of three main components, as shown in the MPC block diagram figure:
- The process model
- The cost function
- The optimizer

The process model contains the information about the controlled process and is used to predict the response of the process variables to the manipulated variables. Minimization of the cost function ensures that the error is reduced. In the last step an optimization technique is applied, and its output gives the input sequence for the next prediction horizon. A sketch of this receding-horizon calculation is given below.
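For illustration only, a minimal sketch of the receding-horizon idea using the discretized linear model from Section II and a quadratic performance index. The horizon, weights and input bounds are assumptions; a single move held over the horizon is optimized with fminbnd as a stand-in for a full constrained optimizer, and c2d/ss require the Control System Toolbox.

    % Receding-horizon control move using the discrete linearized model x(k+1) = Ad*x + Bd*u.
    Ts = 0.1;  Np = 10;                          % sampling time and prediction horizon (assumed)
    sysd = c2d(sys, Ts);                         % sys from the linearization sketch above
    Ad = sysd.A;  Bd = sysd.B;  Cd = sysd.C;
    r = 1;  x = [0; 0];                          % set point (deviation) and current state
    Q = 1;  Rw = 0.01;                           % output and input weights (assumed)
    cost = @(u) mpc_cost(u, Ad, Bd, Cd, x, r, Np, Q, Rw);
    u_opt = fminbnd(cost, -10, 10);              % input-constrained optimization
    % Apply u_opt, measure the new state, and repeat at the next sampling instant.

    function J = mpc_cost(u, Ad, Bd, Cd, x, r, Np, Q, Rw)
        % Cost of holding the candidate move u over the whole prediction horizon.
        J = 0;
        for k = 1:Np
            x = Ad * x + Bd * u;                 % model prediction
            J = J + Q * (r - Cd * x)^2 + Rw * u^2;
        end
    end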


C. Neural Network Predictive Controller (NNPC)

Neural Network Predictive Control (NNPC) is essentially model-based predictive control. It uses a neural network model of the process, a history of past control moves and the optimization of a cost function over the receding prediction horizon to calculate the optimal control moves. Two steps are combined to design the NNPC algorithm:
- System identification using a neural network
- MPC design using the NN model as the predictor

A sketch of this step with a neural network predictor follows.
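A hedged sketch of the NNPC step: the process model in the predictive loop is replaced by a neural network. Here net_model is assumed to be a network already trained on plant data so that y(k+1) ≈ net_model([y(k); u(k)]); the horizon, weights and bounds are again illustrative assumptions.

    % NNPC step: predict the output trajectory with the NN model and optimize the move.
    Np = 10;  r = 1;  y_now = 0.2;               % horizon, set point, current output (assumed)
    Q = 1;  Rw = 0.01;                           % cost weights (assumed)
    cost = @(u) nnpc_cost(u, net_model, y_now, r, Np, Q, Rw);
    u_opt = fminbnd(cost, -10, 10);              % optimal move over the admissible range
    % Apply u_opt, read the new output, and repeat at the next sampling instant.

    function J = nnpc_cost(u, net_model, y, r, Np, Q, Rw)
        % Cost of holding the candidate move u, using the NN as one-step-ahead predictor.
        J = 0;
        for k = 1:Np
            y = net_model([y; u]);               % one-step-ahead NN prediction
            J = J + Q * (r - y)^2 + Rw * u^2;
        end
    end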

FIG. BLOCK DIAGRAM OF MPC

FIG. NNPC BLOCK DIAGRAM

