Hindawi Publishing Corporation, ISRN Applied Mathematics, Volume 2013, Article ID 650467, 7 pages, http://dx.doi.org/10.1155/2013/650467

Research Article: Design Feed Forward Neural Network to Solve Singular Boundary Value Problems
Luma N. M. Tawfiq and Ashraf A. T. Hussein

Department of Mathematics, College of Education Ibn Al-Haitham, Baghdad University, Iraq

Correspondence should be addressed to Ashraf A. T. Hussein (ashraf adnan88@yahoo.com)
Received 14 May 2013; Accepted 9 June 2013

Academic Editors: Z. Huang and X. Wen

Copyright © 2013 L. N. M. Tawfiq and A. A. T. Hussein. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The aim of this paper is to design a feed forward neural network for solving second-order singular boundary value problems in ordinary differential equations. The neural networks use the principle of back propagation with different training algorithms, such as quasi-Newton, Levenberg-Marquardt, and Bayesian Regulation. Two examples are considered to show the effectiveness of using network techniques for solving this type of equation. The convergence properties of the technique and the accuracy of the interpolation technique are considered.
1 Introduction
The study of solving differential equations using artificial neural networks (ANNs) was initiated by Agatonovic-Kustrin and Beresford in [1]. Lagaris et al. in [2] employed two networks, a multilayer perceptron and a radial basis function network, to solve partial differential equations (PDEs) with boundary conditions defined on boundaries with complex geometry. Tawfiq [3] proposed a radial basis function neural network (RBFNN) and a Hopfield neural network (an unsupervised training network). Neural networks have been employed before to solve boundary and initial value problems. Malek and Shekari Beidokhti [4] reported a novel hybrid method based on optimization techniques and neural network methods for the solution of high-order ODEs, which used a three-layered perceptron network. Akca et al. [5] discussed different approaches to using wavelets in the solution of boundary value problems (BVPs) for ODEs, introduced convenient wavelet representations for the derivatives of certain functions, and discussed a wavelet network algorithm. Mc Fall [6] presented multilayer perceptron networks to solve BVPs of PDEs on arbitrary irregular domains, where he used the logsig transfer function in the hidden layer and purelin in the output layer with a gradient descent training algorithm; he also used an RBFNN for solving this problem and compared the two. Junaid et al. [7] used an ANN with a genetic training algorithm and log-sigmoid function for solving first-order ODEs. Abdul Samath et al. [8] suggested the solution of the matrix Riccati differential equation (MRDE) for nonlinear singular systems using an ANN. Ibraheem and Khalaf [9] proposed a shooting neural network algorithm for solving two-point second-order BVPs in ODEs, which reduced the equation to a system of two first-order equations. Hoda and Nagla [10] described a numerical solution with neural networks for solving PDEs with mixed boundary conditions. Majidzadeh [11] suggested a new approach for reducing the inverse problem for a domain to an equivalent problem in a variational setting using a radial basis function neural network; he also used a cascade feed forward network to solve the two-dimensional Poisson equation with back propagation and the Levenberg-Marquardt training algorithm, with a three-layer architecture of 12 input nodes, 18 tansig transfer functions in the hidden layer, and 3 linear nodes in the output layer. Oraibi [12] designed feed forward neural networks (FFNNs) for solving IVPs of ODEs. Ali [13] designed a fast FFNN to solve two-point BVPs. This paper proposes an FFNN to solve the two-point singular boundary value problem (TPSBVP) with a back propagation (BP) training algorithm.
2 Singular Boundary Value Problem
The general form of the 2nd-order two-point boundary value problem (TPBVP) is

$y'' + P(x)\,y' + Q(x)\,y = 0, \quad a \le x \le b,$
$y(a) = A, \quad y(b) = B, \quad \text{where } A, B \in R.$  (1)

There are two types of a point $x_0 \in [0, 1]$: an ordinary point and a singular point.

A function $y(x)$ is analytic at $x_0$ if it has a power series expansion at $x_0$ that converges to $y(x)$ on an open interval containing $x_0$. A point $x_0$ is an ordinary point of the ODE (1) if the functions $P(x)$ and $Q(x)$ are analytic at $x_0$; otherwise, if $P(x)$ or $Q(x)$ is not analytic at $x_0$, then $x_0$ is said to be a singular point of the ODE [14, 15].

There is at present no theoretical work justifying numerical methods for solving problems with irregular singular points; the main practical approach to such problems seems to be the semianalytic technique [16].
3 Artificial Neural Network
An ANN is a simplified mathematical model of the human brain. It can be implemented by both electronic elements and computer software. It is a parallel distributed processor with a large number of connections; it is an information processing system that has certain performance characteristics in common with biological neural networks [17].
The arriving signals, called inputs, multiplied by the (adjusted) connection weights, are first summed (combined) and then passed through a transfer function to produce the output for that neuron. The activation (transfer) function acts on the weighted sum of the neuron's inputs, and the most commonly used transfer function is the sigmoid function (tansig) [13].
There are two main connection types: feedback (recurrent) and feed forward connections. Feedback is one type of connection where the output of one layer routes back to the input of a previous layer, or to the same layer. A feed forward neural network (FFNN) does not have a connection back from the output to the input neurons [18].
There are many different training algorithms, but the most often used is the Delta rule or back propagation (BP) rule. A neural network is trained to map a set of input data by iterative adjustment of the weights. Information from inputs is fed forward through the network to optimize the weights between neurons. Optimization of the weights is made by backward propagation of the error during the training phase.

The ANN reads the input and output values in the training data set and changes the values of the weighted links to reduce the difference between the predicted and target (observed) values. The error in prediction is minimized across many training cycles (iterations or epochs) until the network reaches a specified level of accuracy. A complete round of forward-backward passes and weight adjustments using all input-output pairs in the data set is called an epoch or iteration.

If a network is left to train for too long, however, it will be overtrained and will lose the ability to generalize.

In this paper we focus on the training situation known as supervised training, in which a set of input/output data patterns is available. Thus, the ANN has to be trained to produce the desired output according to the examples.

In order to perform supervised training, we need a way of evaluating the ANN output error between the actual and the expected outputs. A popular measure is the mean squared error (MSE) or root mean squared error (RMSE) [19].
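For reference, the two error measures can be written directly (a minimal Python sketch; the function names are ours, not part of any toolbox):

```python
import numpy as np

def mse(target, predicted):
    """Mean squared error between target values and network outputs."""
    err = np.asarray(target, dtype=float) - np.asarray(predicted, dtype=float)
    return float(np.mean(err ** 2))

def rmse(target, predicted):
    """Root mean squared error."""
    return float(np.sqrt(mse(target, predicted)))

print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))   # (0 + 0.25 + 1) / 3
```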
4 Description of the Method
In the proposed approach, the model function is expressed as the sum of two terms: the first term satisfies the boundary conditions (BCs) and contains no adjustable parameters; the second term can be found by using an FFNN, which is trained so as to satisfy the differential equation. Such a technique is called a collocation neural network.

In this section we illustrate how our approach can be used to find the approximate solution of the general form of a 2nd-order TPSBVP
$x^m y''(x) = F\!\left(x, y(x), y'(x)\right),$  (2)

subject to certain BCs, where $m \in Z$, $x \in R$, $D \subset R$ denotes the domain, and $y(x)$ is the solution to be computed.

If $y_t(x, p)$ denotes a trial solution with adjustable parameters $p$, the problem is transformed to the discretized form

$\min_p \sum_{x_i \in D} F\!\left(x_i, y_t(x_i, p), y_t'(x_i, p)\right)$  (3)

subject to the constraints imposed by the BCs.

In our proposed approach, the trial solution $y_t$ employs an FFNN, and the parameters $p$ correspond to the weights and biases of the neural architecture. We choose a form for the trial function $y_t(x)$ such that it satisfies the BCs. This is achieved by writing it as a sum of two terms:

$y_t(x, p) = A(x) + G\!\left(x, N(x, p)\right),$  (4)

where $N(x, p)$ is a single-output FFNN with parameters $p$ and $n$ input units fed with the input vector $x$. The term $A(x)$ contains no adjustable parameters and satisfies the BCs. The second term $G$ is constructed so as not to contribute to the BCs, since $y_t(x)$ satisfies them. This term can be formed by using an FFNN whose weights and biases are to be adjusted in order to deal with the minimization problem.

An efficient minimization of (3) can be considered as a procedure of training the FFNN, where the error corresponding to each input $x_i$ is the value $E(x_i)$, which has to be forced near zero. Computation of this error value involves not only the FFNN output but also the derivatives of the output with respect to any of its inputs.

Therefore, in computing the gradient of the error with respect to the network weights, consider a multilayer FFNN with $n$ input units (where $n$ is the dimension of the domain), one hidden layer with $H$ sigmoid nodes, and a linear output unit.
Table 1: Analytic and neural solutions of Example 1. Output of the suggested FFNN, $y_t(x)$, for different training algorithms.

| x   | Analytic solution $y_a(x)$ | Trainlm           | Trainbfg          | Trainbr           |
| 0.0 | 1                 | 1                 | 1.00026931058869  | 1.00000028044679  |
| 0.1 | 0.995004165278026 | 0.994992000703617 | 0.995004165151447 | 0.995003016942414 |
| 0.2 | 0.980066577841242 | 0.980066577841242 | 0.980038331595444 | 0.980068202697384 |
| 0.3 | 0.955336489125606 | 0.955336489125606 | 0.955326539544390 | 0.955327719921018 |
| 0.4 | 0.921060994002885 | 0.921060994002885 | 0.921060993873107 | 0.921057767471849 |
| 0.5 | 0.877582561890373 | 0.877582561890373 | 0.877583333411869 | 0.877587507015204 |
| 0.6 | 0.825335614909678 | 0.825335614909678 | 0.825335614867241 | 0.825332390929038 |
| 0.7 | 0.764842187284489 | 0.764838404395133 | 0.764842187227892 | 0.764831348535762 |
| 0.8 | 0.696706709347165 | 0.696656420761032 | 0.696706709263878 | 0.696708274506624 |
| 0.9 | 0.621609968270664 | 0.621454965504409 | 0.621609968278774 | 0.621608898908218 |
| 1.0 | 0.540302305868140 | 0.540302305868140 | 0.540302305877365 | 0.540302558954826 |
Table 2: Accuracy of the solution for Example 1. The error $E(x) = |y_t(x) - y_a(x)|$, where $y_t(x)$ is computed by the following training algorithms.

| x   | Trainlm              | Trainbfg             | Trainbr              |
| 0.0 | 0                    | 0.000269310588686622 | 2.80446790679179e-07 |
| 0.1 | 1.21645744084464e-05 | 1.26578747483563e-10 | 1.14833561148942e-06 |
| 0.2 | 0                    | 2.82462457977806e-05 | 1.62485614274566e-06 |
| 0.3 | 0                    | 9.94958121636191e-06 | 8.76920458825481e-06 |
| 0.4 | 0                    | 1.29778077173626e-10 | 3.22653103579373e-06 |
| 0.5 | 0                    | 7.71521496467642e-07 | 4.94512483140142e-06 |
| 0.6 | 0                    | 4.24377200047843e-11 | 3.22398064012130e-06 |
| 0.7 | 3.78288935598548e-06 | 5.65963942378289e-11 | 1.08387487263162e-05 |
| 0.8 | 5.02885861338731e-05 | 8.32875990397497e-11 | 1.56515945903823e-06 |
| 0.9 | 0.000155002766255130 | 8.10940203876953e-12 | 1.06936244681499e-06 |
| 1.0 | 0                    | 9.22539822312274e-12 | 2.53086685830795e-07 |
For a given input $x$, the output of the FFNN is

$N = \sum_{i=1}^{H} v_i\, \sigma(z_i), \quad \text{where } z_i = \sum_{j=1}^{n} w_{ij} x_j + b_i,$  (5)

where $w_{ij}$ denotes the weight connecting input unit $j$ to hidden unit $i$, $v_i$ denotes the weight connecting hidden unit $i$ to the output unit, $b_i$ denotes the bias of hidden unit $i$, and $\sigma(z)$ is the sigmoid transfer function (tansig).

The gradient of the FFNN with respect to its parameters can be easily obtained as

$\frac{\partial N}{\partial v_i} = \sigma(z_i), \quad \frac{\partial N}{\partial b_i} = v_i\, \sigma'(z_i), \quad \frac{\partial N}{\partial w_{ij}} = v_i\, \sigma'(z_i)\, x_j.$  (6)
Once the derivative of the error with respect to the network parameters has been defined, it is straightforward to employ any minimization technique. It must also be noted that the batch mode of weight updates may be employed.
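The closed-form expressions (5) and (6) are easy to implement directly. The following sketch (in Python rather than the MATLAB toolbox used in the paper; the helper names are ours) computes the network output and parameter gradients for the single-input case $n = 1$ with $\sigma = \tanh$, and checks one gradient against a finite difference:

```python
import numpy as np

def ffnn(x, w, b, v):
    """Network output (5) for one input unit (n = 1):
    N = sum_i v_i * sigma(z_i), with z_i = w_i * x + b_i and sigma = tanh."""
    return np.dot(v, np.tanh(w * x + b))

def grads(x, w, b, v):
    """Parameter gradients (6); for tanh, sigma'(z) = 1 - tanh(z)**2."""
    s = np.tanh(w * x + b)
    ds = 1.0 - s ** 2
    return s, v * ds, v * ds * x   # dN/dv_i, dN/db_i, dN/dw_i

rng = np.random.default_rng(0)
w, b, v = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
x, eps = 0.3, 1e-6

dv, db, dw = grads(x, w, b, v)
b2 = b.copy(); b2[0] += eps                     # perturb one bias
fd = (ffnn(x, w, b2, v) - ffnn(x, w, b, v)) / eps
print(abs(fd - db[0]) < 1e-5)                   # True
```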
5 Illustration of the Method
In this section we describe the solution of the TPSBVP using an FFNN. To illustrate the method, we will consider the 2nd-order TPSBVP

$x^m \frac{d^2 y(x)}{dx^2} = f\!\left(x, y, y'\right),$  (7)

where $x \in [a, b]$, with the BCs $y(a) = A$, $y(b) = B$. A trial solution can be written as

$y_t(x, p) = \frac{bA - aB}{b - a} + \frac{(B - A)\,x}{b - a} + (x - a)(x - b)\,N(x, p),$  (8)

where $N(x, p)$ is the output of an FFNN with one input unit for $x$ and weights $p$.
Table 3: The performance of the training, with epoch and time, for Example 1.

| TrainFcn | Performance of train | Epoch | Time   | MSE        |
| Trainlm  | 0.00                 | 75    | 0.0001 | 2.1859e-09 |
| Trainbfg | 6.42e-21             | 6187  | 0.0423 | 6.6751e-09 |
| Trainbr  | 5.88e-12             | 1861  | 0.0030 | 2.0236e-11 |
Note that $y_t(x)$ satisfies the BCs by construction. The error quantity to be minimized is given by

$E[p] = \sum_{i=1}^{n} \left[ \frac{d^2 y_t(x_i, p)}{dx^2} - f\!\left(x_i, y_t(x_i, p), \frac{d y_t(x_i, p)}{dx}\right) \right]^2,$  (9)

where the $x_i \in [a, b]$. Since

$\frac{d y_t(x, p)}{dx} = \frac{B - A}{b - a} + \left[(x - a) + (x - b)\right] N(x, p) + (x - a)(x - b)\,\frac{dN(x, p)}{dx},$

$\frac{d^2 y_t(x, p)}{dx^2} = 2 N(x, p) + 2\left[(x - a) + (x - b)\right] \frac{dN(x, p)}{dx} + (x - a)(x - b)\,\frac{d^2 N(x, p)}{dx^2},$  (10)

it is straightforward to compute the gradient of the error with respect to the parameters $p$ using (6). The same holds for all subsequent model problems.
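A minimal sketch of (8) and (10) for a tanh hidden layer (Python, with helper names of our own; for tanh, $\sigma' = 1 - \sigma^2$ and $\sigma'' = -2\sigma(1 - \sigma^2)$, so $dN/dx$ and $d^2N/dx^2$ are available in closed form):

```python
import numpy as np

def net(x, w, bb, v):
    """FFNN output N(x, p) and its derivatives w.r.t. x, for sigma = tanh."""
    s = np.tanh(w * x + bb)
    N = np.dot(v, s)
    dN = np.dot(v * (1 - s**2), w)                    # dN/dx
    d2N = np.dot(v * (-2 * s * (1 - s**2)), w**2)     # d2N/dx2
    return N, dN, d2N

def trial(x, p, a, b, A, B):
    """Trial solution (8) and its derivatives per (10)."""
    w, bb, v = p
    N, dN, d2N = net(x, w, bb, v)
    yt = (b*A - a*B)/(b - a) + (B - A)*x/(b - a) + (x - a)*(x - b)*N
    dyt = (B - A)/(b - a) + ((x - a) + (x - b))*N + (x - a)*(x - b)*dN
    d2yt = 2*N + 2*((x - a) + (x - b))*dN + (x - a)*(x - b)*d2N
    return yt, dyt, d2yt

# The boundary values are reproduced exactly by construction:
rng = np.random.default_rng(1)
p = (rng.normal(size=5), rng.normal(size=5), rng.normal(size=5))
a, b, A, B = 0.0, 1.0, 1.0, 2.0
print(trial(a, p, a, b, A, B)[0], trial(b, p, a, b, A, B)[0])  # 1.0 2.0
```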
6 Examples
In this section we report numerical results using a multilayer FFNN having one hidden layer with 5 hidden units (neurons) and one linear output unit. The sigmoid activation of each hidden unit is tansig. The analytic solution $y_a(x)$ was known in advance; therefore, we test the accuracy of the obtained solutions by computing the deviation

$\Delta y(x) = |y_t(x) - y_a(x)|.$  (11)

In order to illustrate the characteristics of the solutions provided by the neural network method, we provide figures displaying the corresponding deviation $\Delta y(x)$ both at the few points (training points) that were used for training and at many other points (test points) of the domain of the equation. The latter kind of figure is of major importance, since it shows the interpolation capabilities of the neural solution, which appear superior to solutions obtained by other methods. Moreover, we can consider points outside the training interval in order to obtain an estimate of the extrapolation performance of the obtained numerical solution.
Example 1. Consider the following 2nd-order TPSBVP:

$y'' + \frac{1}{x}\, y' + \cos(x) + \frac{\sin(x)}{x} = 0, \quad x \in [0, 1],$  (12)
Table 4: Weights and biases of the network for different training algorithms, Example 1.

(a) Weights and biases for trainlm

| net.IW{1,1} | net.LW{2,1} | net.b{1} |
| 0.2858 | 0.0759 | 0.1299 |
| 0.7572 | 0.0540 | 0.5688 |
| 0.7537 | 0.5308 | 0.4694 |
| 0.3804 | 0.7792 | 0.0119 |
| 0.5678 | 0.9340 | 0.3371 |

(b) Weights and biases for trainbfg

| net.IW{1,1} | net.LW{2,1} | net.b{1} |
| 0.4068 | 0.8334 | 0.2601 |
| 0.1126 | 0.4036 | 0.0868 |
| 0.4438 | 0.3902 | 0.4294 |
| 0.3002 | 0.3604 | 0.2573 |
| 0.4014 | 0.1403 | 0.2976 |

(c) Weights and biases for trainbr

| net.IW{1,1} | net.LW{2,1} | net.b{1} |
| 0.1696 | 0.8803 | 0.4075 |
| 0.2788 | 0.4711 | 0.8445 |
| 0.1982 | 0.4040 | 0.6153 |
| 0.1951 | 0.1792 | 0.3766 |
| 0.3268 | 0.9689 | 0.8772 |
with BCs $y'(0) = 0$, $y(1) = \cos(1)$. The analytic solution is $y_a(x) = \cos(x)$. According to (8), the trial neural form of the solution is taken to be

$y_t(x) = \cos(1)\, x + x(x - 1)\, N(x, p).$  (13)

The FFNN was trained using a grid of ten equidistant points in $[0, 1]$. Figure 1 displays the analytic and neural solutions with different training algorithms. The neural results with different types of training algorithms, such as Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg), and Bayesian Regulation (trainbr), are introduced in Table 1, and their errors are given in Table 2. Table 3 gives the performance of the training with epoch and time, and Table 4 gives the weights and biases of the designed network.

Ramos in [20] solved this example using a $C^1$-linearization method and gave the absolute error $4.079613e-04$; also, Kumar in [21] solved this example by the three-point finite difference technique and gave the absolute error $4.4e-05$.
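As a quick sanity check on (12), one can verify numerically that the stated analytic solution $y_a(x) = \cos(x)$ makes the left-hand side vanish (here $y' = -\sin x$ and $y'' = -\cos x$):

```python
import numpy as np

# Substitute y = cos(x) into (12): y' = -sin(x), y'' = -cos(x), so the
# left-hand side  y'' + (1/x) y' + cos(x) + sin(x)/x  should vanish.
x = np.linspace(0.1, 1.0, 10)           # stay away from the singular point x = 0
lhs = -np.cos(x) + (1.0/x)*(-np.sin(x)) + np.cos(x) + np.sin(x)/x
print(np.max(np.abs(lhs)))              # at rounding level
```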
Table 5: Analytic and neural solutions of Example 2. Output of the suggested FFNN, $y_t(x)$, for different training algorithms.

| x   | Analytic solution $y_a(x)$ | Trainlm          | Trainbfg          | Trainbr           |
| 0.0 | 1                | 1                | 0.999999999999781 | 0.999999991504880 |
| 0.1 | 1.10517091807565 | 1.10514495970783 | 1.10517935005948  | 1.10518327439088  |
| 0.2 | 1.22140275816017 | 1.22139291634314 | 1.22140560576571  | 1.22140301047712  |
| 0.3 | 1.34985880757600 | 1.34985880757600 | 1.34985880757533  | 1.34985789841558  |
| 0.4 | 1.49182469764127 | 1.49182469764127 | 1.49182469764283  | 1.49182590291778  |
| 0.5 | 1.64872127070013 | 1.64872127070013 | 1.64872127069896  | 1.64871934058054  |
| 0.6 | 1.82211880039051 | 1.82211880039051 | 1.82211880039100  | 1.82211668702999  |
| 0.7 | 2.01375270747048 | 2.01374698451417 | 2.01375253599035  | 2.01375577873317  |
| 0.8 | 2.22554092849247 | 2.22554092849247 | 2.22554092849241  | 2.22553872338720  |
| 0.9 | 2.45960311115695 | 2.45965304884168 | 2.45961509099396  | 2.45960396018800  |
| 1.0 | 2.71828182845905 | 2.71828182845905 | 2.71828182845889  | 2.71828168663973  |
Table 6: Accuracy of the solutions for Example 2. The error $E(x) = |y_t(x) - y_a(x)|$, where $y_t(x)$ is computed by the following training algorithms.

| x   | Trainlm              | Trainbfg             | Trainbr              |
| 0.0 | 0                    | 2.19047002758543e-13 | 8.49512027389920e-09 |
| 0.1 | 2.59583678221542e-05 | 8.43198383004840e-06 | 1.23563152347739e-05 |
| 0.2 | 9.84181703400644e-06 | 2.84760554247754e-06 | 2.52316951554477e-07 |
| 0.3 | 0                    | 6.74571509762245e-13 | 9.09160424944489e-07 |
| 0.4 | 0                    | 1.55675472512939e-12 | 1.20527650837587e-06 |
| 0.5 | 0                    | 1.16662235427611e-12 | 1.93011959059852e-06 |
| 0.6 | 2.22044604925031e-16 | 4.86721773995669e-13 | 2.11336051969546e-06 |
| 0.7 | 5.72295631107167e-06 | 1.71480126542889e-07 | 3.07126269616376e-06 |
| 0.8 | 0                    | 5.72875080706581e-14 | 2.20510526371953e-06 |
| 0.9 | 4.99376847331590e-05 | 1.19798370104007e-05 | 8.49031051686211e-07 |
| 1.0 | 0                    | 1.51878509768721e-13 | 1.41819320731429e-07 |
Example 2. Consider the following 2nd-order TPSBVP:

$y'' = -\left[\frac{1 - 2x}{x}\right] y' - \left[\frac{x - 1}{x}\right] y, \quad x \in [0, 1],$  (14)

with BCs $y(0) = 1$, $y(1) = \exp(1)$, and the analytic solution is $y_a(x) = \exp(x)$. According to (8), the trial neural form of the solution is

$y_t(x) = 1 + (\exp(1) - 1)\, x + x(x - 1)\, N(x, p).$  (15)

The FFNN was trained using a grid of ten equidistant points in $[0, 1]$. Figure 2 displays the analytic and neural solutions with different training algorithms. The neural network results with different types of training algorithms, such as trainlm, trainbfg, and trainbr, are introduced in Table 5, and their errors are given in Table 6. Table 7 gives the performance of the training
Figure 1: Analytic and neural solutions of Example 1 using trainbfg, trainbr, and trainlm.
with epoch and time, and Table 8 gives the weights and biases of the designed network.
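The whole procedure for Example 2 can be sketched end to end. The following Python snippet substitutes SciPy's `least_squares` for the MATLAB training functions (trainlm, trainbfg, trainbr) used in the paper, so it illustrates the collocation idea under that assumption rather than reproducing the reported numbers; it minimizes the residual of (14) for the trial form (15) at interior collocation points, avoiding the singular point $x = 0$:

```python
import numpy as np
from scipy.optimize import least_squares

a, b, A, B = 0.0, 1.0, 1.0, np.e           # BCs of Example 2: y(0) = 1, y(1) = e

def unpack(p):
    return p[:5], p[5:10], p[10:15]        # hidden weights, biases, output weights

def net(x, w, bb, v):
    s = np.tanh(np.outer(x, w) + bb)       # hidden activations, shape (npts, 5)
    return s @ v, ((1 - s**2) * w) @ v, ((-2*s*(1 - s**2)) * w**2) @ v

def residual(p, x):
    w, bb, v = unpack(p)
    N, dN, d2N = net(x, w, bb, v)
    # trial solution (15) and its derivatives, per (10) with a = 0, b = 1
    yt   = 1 + (np.e - 1)*x + x*(x - 1)*N
    dyt  = (np.e - 1) + (2*x - 1)*N + x*(x - 1)*dN
    d2yt = 2*N + 2*(2*x - 1)*dN + x*(x - 1)*d2N
    # residual of (14): y'' + ((1-2x)/x) y' + ((x-1)/x) y
    return d2yt + ((1 - 2*x)/x)*dyt + ((x - 1)/x)*yt

x_train = np.linspace(0.1, 0.9, 9)         # interior collocation points
rng = np.random.default_rng(0)
sol = least_squares(residual, 0.5*rng.normal(size=15), args=(x_train,))

w, bb, v = unpack(sol.x)
yt = 1 + (np.e - 1)*x_train + x_train*(x_train - 1)*net(x_train, w, bb, v)[0]
print(np.max(np.abs(yt - np.exp(x_train))))  # maximum deviation from exp(x)
```

Because the trial form enforces both boundary values exactly, driving the collocation residual to zero pulls the network toward the unique solution $e^x$.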
Table 7: The performance of the training, with epoch and time, for Example 2.

| TrainFcn | Performance of train | Epoch | Time   | MSE        |
| Trainlm  | 7.04e-33             | 1902  | 0.0030 | 2.6977e-10 |
| Trainbfg | 4.00e-28             | 2093  | 0.0104 | 1.8225e-11 |
| Trainbr  | 2.43e-12             | 3481  | 0.0056 | 1.458e-11  |
Table 8: Weights and biases of the network for different training algorithms, Example 2.

(a) Weights and biases for trainlm

| net.IW{1,1} | net.LW{2,1} | net.b{1} |
| 0.7094 | 0.1626 | 0.5853 |
| 0.7547 | 0.1190 | 0.2238 |
| 0.2760 | 0.4984 | 0.7513 |
| 0.6797 | 0.9597 | 0.2551 |
| 0.6551 | 0.3404 | 0.5060 |

(b) Weights and biases for trainbfg

| net.IW{1,1} | net.LW{2,1} | net.b{1} |
| 0.7094 | 0.1626 | 0.5853 |
| 0.7547 | 0.1190 | 0.2238 |
| 0.2760 | 0.4984 | 0.7513 |
| 0.6797 | 0.9597 | 0.2551 |
| 0.6551 | 0.3404 | 0.5060 |

(c) Weights and biases for trainbr

| net.IW{1,1} | net.LW{2,1} | net.b{1} |
| 0.9357 | 0.7406 | 0.2122 |
| 0.4579 | 0.7437 | 0.0985 |
| 0.2405 | 0.1059 | 0.8236 |
| 0.7639 | 0.6816 | 0.1750 |
| 0.7593 | 0.4633 | 0.1636 |
7 Conclusion
From the previously mentioned problems, it is clear that the proposed network can handle TPSBVPs effectively and provide an accurate approximate solution throughout the whole domain, and not only at the training points. As evident from the tables, the results of the proposed network are more precise than those of the methods suggested in [20, 21].

In general, the practical results on FFNNs show that the Levenberg-Marquardt algorithm (trainlm) has the fastest convergence, then trainbfg, and then Bayesian Regulation (trainbr). However, trainbr does not perform well on function approximation problems. The performance of the various algorithms can be affected by the accuracy required of the approximation.
Figure 2: Analytic and neural solutions of Example 2 using different training algorithms.
References
[1] S. Agatonovic-Kustrin and R. Beresford, "Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research," Journal of Pharmaceutical and Biomedical Analysis, vol. 22, no. 5, pp. 717–727, 2000.

[2] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, "Neural-network methods for boundary value problems with irregular boundaries," IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1041–1049, 2000.

[3] L. N. M. Tawfiq, Design and training artificial neural networks for solving differential equations [Ph.D. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2004.

[4] A. Malek and R. Shekari Beidokhti, "Numerical solution for high order differential equations using a hybrid neural network-optimization method," Applied Mathematics and Computation, vol. 183, no. 1, pp. 260–271, 2006.

[5] H. Akca, M. H. Al-Lail, and V. Covachev, "Survey on wavelet transform and application in ODE and wavelet networks," Advances in Dynamical Systems and Applications, vol. 1, no. 2, pp. 129–162, 2006.

[6] K. S. Mc Fall, An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries [Ph.D. thesis], Georgia Institute of Technology, 2006.

[7] A. Junaid, M. A. Z. Raja, and I. M. Qureshi, "Evolutionary computing approach for the solution of initial value problems in ordinary differential equations," World Academy of Science, Engineering and Technology, vol. 55, pp. 578–581, 2009.

[8] J. Abdul Samath, P. S. Kumar, and A. Begum, "Solution of linear electrical circuit problem using neural networks," International Journal of Computer Applications, vol. 2, no. 1, pp. 6–13, 2010.

[9] K. I. Ibraheem and B. M. Khalaf, "Shooting neural networks algorithm for solving boundary value problems in ODEs," Applications and Applied Mathematics, vol. 6, no. 11, pp. 1927–1941, 2011.

[10] S. A. Hoda I. and H. A. Nagla, "On neural network methods for mixed boundary value problems," International Journal of Nonlinear Science, vol. 11, no. 3, pp. 312–316, 2011.
[11] K. Majidzadeh, "Inverse problem with respect to domain and artificial neural network algorithm for the solution," Mathematical Problems in Engineering, vol. 2011, Article ID 145608, 16 pages, 2011.

[12] Y. A. Oraibi, Design feed forward neural networks for solving ordinary initial value problem [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2011.

[13] M. H. Ali, Design fast feed forward neural networks to solve two point boundary value problems [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2012.

[14] I. Rachunkova, S. Stanek, and M. Tvrdy, Solvability of Nonlinear Singular Problems for Ordinary Differential Equations, Hindawi Publishing Corporation, New York, NY, USA, 2008.

[15] L. F. Shampine, J. Kierzenka, and M. W. Reichelt, "Solving boundary value problems for ordinary differential equations in MATLAB with bvp4c," 2000.

[16] H. W. Rasheed, Efficient semi-analytic technique for solving second order singular ordinary boundary value problems [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2011.

[17] A. I. Galushkin, Neural Networks Theory, Springer, Berlin, Germany, 2007.

[18] K. Mehrotra, C. K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, Springer, New York, NY, USA, 1996.

[19] A. Ghaffari, H. Abdollahi, M. R. Khoshayand, I. S. Bozchalooi, A. Dadgar, and M. Rafiee-Tehrani, "Performance comparison of neural network training algorithms in modeling of bimodal drug delivery," International Journal of Pharmaceutics, vol. 327, no. 1-2, pp. 126–138, 2006.

[20] J. I. Ramos, "Piecewise quasilinearization techniques for singular boundary-value problems," Computer Physics Communications, vol. 158, no. 1, pp. 12–25, 2004.

[21] M. Kumar, "A three-point finite difference method for a class of singular two-point boundary value problems," Journal of Computational and Applied Mathematics, vol. 145, no. 1, pp. 89–97, 2002.
2 ISRN Applied Mathematics
2 Singular Boundary Value Problem
The general form of the 2nd-order two-point boundary valueproblem (TPBVP) is
11991010158401015840+ 119875 (119909) 119910
1015840+ 119876 (119909) 119910 = 0 119886 le 119909 le 119887
119910 (119886) = 119860 119910 (119887) = 119861 where 119860 119861 isin 119877(1)
there are two types of a point 1199090isin [0 1] ordinary point and
singular pointA function 119910(119909) is analytic at 119909
0if it has a power series
expansion at 1199090that converges to 119910(119909) on an open interval
containing 1199090 A point 119909
0is an ordinary point of the ODE (1)
if the functions 119875(119909) and 119876(119909) are analytic at 1199090 Otherwise
1199090is a singular point of the ODE On the other hand if 119875(119909)
or 119876(119909) are not analytic at 1199090 then 119909
0is said to be a singular
point [14 15]There is at present no theoretical work justifying numer-
ical methods for solving problems with irregular singularpoints The main practical occurrence of such problemsseems to be semianalytic technique [16]
3 Artificial Neural Network
Ann is a simplified mathematical model of the humanbrain It can be implemented by both electric elements andcomputer software It is a parallel distributed processor withlarge numbers of connections it is an information processingsystem that has certain performance characters in commonwith biological neural networks [17]
The arriving signals called inputs multiplied by theconnection weights (adjusted) are first summed (combined)and then passed through a transfer function to produce theoutput for that neuronThe activation (transfer) function actson the weighted sum of the neuronrsquos inputs and the mostcommonly used transfer function is the sigmoid function(tansig) [13]
There are two main connection formulas (types) feed-back (recurrent) and feed forward connections Feedback isone type of connection where the output of one layer routesback to the input of a previous layer or to the same layer Feedforward neural network (FFNN) does not have a connectionback from the output to the input neurons [18]
There are many different training algorithms but themost often used training algorithm is the Delta rule or backpropagation (BP) rule A neural network is trained to mapa set of input data by iterative adjustment of the weightsInformation from inputs is fed forward through the networkto optimize the weights between neurons Optimization ofthe weights is made by backward propagation of the errorduring training phase
The Ann reads the input and output values in the trainingdata set and changes the value of the weighted links to reducethe difference between the predicted and target (observed)values The error in prediction is minimized across manytraining cycles (iteration or epoch) until network reachesspecified level of accuracy A complete round of forward-backward passes and weight adjustments using all input-output pairs in the data set is called an epoch or iteration
If a network is left to train for too long however it will beovertrained and will lose the ability to generalize
In this paper we focused on the training situation knownas supervised training in which a set of inputoutput datapatterns is available Thus the Ann has to be trained toproduce the desired output according to the examples
In order to perform a supervised training we need a wayof evaluating the Ann output error between the actual andthe expected outputs A popularmeasure is themean squarederror (MSE) or root mean squared error (RMSE) [19]
4 Description of the Method
In the proposed approach the model function is expressed asthe sum of two terms the first term satisfies the boundaryconditions (BCs) and contains no adjustable parametersThesecond term can be found by using FFNN which is trainedso as to satisfy the differential equation and such techniquecalled collocation neural network
In this section we illustrate how our approach can be used to find the approximate solution of the general form of a 2nd-order two-point singular boundary value problem (TPSBVP)

$$x^m y''(x) = F(x, y(x), y'(x)), \quad (2)$$

subject to certain BCs, where $m \in \mathbb{Z}$, $x \in \mathbb{R}$, $D \subset \mathbb{R}$ denotes the domain, and $y(x)$ is the solution to be computed. If $y_t(x, p)$ denotes a trial solution with adjustable parameters $p$, the problem is transformed to a discretized form

$$\min_{p} \sum_{x_i \in \hat{D}} \left[ x_i^m\, y_t''(x_i, p) - F\left(x_i, y_t(x_i, p), y_t'(x_i, p)\right) \right]^2, \quad (3)$$

subject to the constraints imposed by the BCs, where $\hat{D}$ is a discretization of the domain $D$.

In our proposed approach the trial solution $y_t$ employs an FFNN, and the parameters $p$ correspond to the weights and biases of the neural architecture. We choose a form for the trial function $y_t(x)$ such that it satisfies the BCs. This is achieved by writing it as a sum of two terms:
$$y_t(x, p) = A(x) + G(x, N(x, p)), \quad (4)$$

where $N(x, p)$ is a single-output FFNN with parameters $p$ and $n$ input units fed with the input vector $x$. The term $A(x)$ contains no adjustable parameters and satisfies the BCs. The second term $G$ is constructed so as not to contribute to the BCs, since $y_t(x)$ must satisfy them. This term can be formed by using an FFNN whose weights and biases are to be adjusted in order to deal with the minimization problem.

An efficient minimization of (3) can be considered as a procedure of training the FFNN, where the error corresponding to each input $x_i$ is the value $E(x_i)$, which has to be forced near zero. Computation of this error value involves not only the FFNN output but also the derivatives of the output with respect to any of its inputs.
Therefore, in computing the gradient of the error with respect to the network weights, consider a multilayer FFNN with $n$ input units (where $n$ is the dimension of the domain), one hidden layer with $H$ sigmoid nodes, and a linear output unit.
ISRN Applied Mathematics 3
Table 1: Analytic and neural solutions of Example 1. The last three columns are the output of the suggested FFNN $y_t(x)$ for different training algorithms.

| Input $x$ | Analytic solution $y_a(x)$ | Trainlm | Trainbfg | Trainbr |
|---|---|---|---|---|
| 0.0 | 1 | 1 | 1.00026931058869 | 1.00000028044679 |
| 0.1 | 0.995004165278026 | 0.994992000703617 | 0.995004165151447 | 0.995003016942414 |
| 0.2 | 0.980066577841242 | 0.980066577841242 | 0.980038331595444 | 0.980068202697384 |
| 0.3 | 0.955336489125606 | 0.955336489125606 | 0.955326539544390 | 0.955327719921018 |
| 0.4 | 0.921060994002885 | 0.921060994002885 | 0.921060993873107 | 0.921057767471849 |
| 0.5 | 0.877582561890373 | 0.877582561890373 | 0.877583333411869 | 0.877587507015204 |
| 0.6 | 0.825335614909678 | 0.825335614909678 | 0.825335614867241 | 0.825332390929038 |
| 0.7 | 0.764842187284489 | 0.764838404395133 | 0.764842187227892 | 0.764831348535762 |
| 0.8 | 0.696706709347165 | 0.696656420761032 | 0.696706709263878 | 0.696708274506624 |
| 0.9 | 0.621609968270664 | 0.621454965504409 | 0.621609968278774 | 0.621608898908218 |
| 1.0 | 0.540302305868140 | 0.540302305868140 | 0.540302305877365 | 0.540302558954826 |
Table 2: Accuracy of the solution for Example 1. The error is $E(x) = |y_t(x) - y_a(x)|$, where $y_t(x)$ is computed by the following training algorithms.

| $x$ | Trainlm | Trainbfg | Trainbr |
|---|---|---|---|
| 0.0 | 0 | 0.000269310588686622 | 2.80446790679179e-07 |
| 0.1 | 1.21645744084464e-05 | 1.26578747483563e-10 | 1.14833561148942e-06 |
| 0.2 | 0 | 2.82462457977806e-05 | 1.62485614274566e-06 |
| 0.3 | 0 | 9.94958121636191e-06 | 8.76920458825481e-06 |
| 0.4 | 0 | 1.29778077173626e-10 | 3.22653103579373e-06 |
| 0.5 | 0 | 7.71521496467642e-07 | 4.94512483140142e-06 |
| 0.6 | 0 | 4.24377200047843e-11 | 3.22398064012130e-06 |
| 0.7 | 3.78288935598548e-06 | 5.65963942378289e-11 | 1.08387487263162e-05 |
| 0.8 | 5.02885861338731e-05 | 8.32875990397497e-11 | 1.56515945903823e-06 |
| 0.9 | 0.000155002766255130 | 8.10940203876953e-12 | 1.06936244681499e-06 |
| 1.0 | 0 | 9.22539822312274e-12 | 2.53086685830795e-07 |
For a given input $x$ the output of the FFNN is

$$N = \sum_{i=1}^{H} v_i\, \sigma(z_i), \quad \text{where } z_i = \sum_{j=1}^{n} w_{ij} x_j + b_i, \quad (5)$$

$w_{ij}$ denotes the weight connecting the input unit $j$ to the hidden unit $i$, $v_i$ denotes the weight connecting the hidden unit $i$ to the output unit, $b_i$ denotes the bias of hidden unit $i$, and $\sigma(z)$ is the sigmoid transfer function (tansig).

The gradient of the FFNN output with respect to the parameters of the FFNN can be easily obtained as

$$\frac{\partial N}{\partial v_i} = \sigma(z_i), \qquad \frac{\partial N}{\partial b_i} = v_i\, \sigma'(z_i), \qquad \frac{\partial N}{\partial w_{ij}} = v_i\, \sigma'(z_i)\, x_j. \quad (6)$$
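To make (5) and (6) concrete, the following sketch (our own; the variable names are illustrative) evaluates a one-input network with tanh (tansig) hidden units and checks one of the analytic parameter gradients of (6) against a finite difference:

```python
import numpy as np

def network(x, w, v, b):
    """N(x) = sum_i v_i * tanh(w_i * x + b_i)  -- eq. (5) with n = 1."""
    z = w * x + b
    return np.sum(v * np.tanh(z))

def gradients(x, w, v, b):
    """Analytic gradients of N w.r.t. v, b, w -- eq. (6)."""
    z = w * x + b
    s = np.tanh(z)
    ds = 1.0 - s**2                 # sigma'(z) for tanh
    return s, v * ds, v * ds * x    # dN/dv, dN/db, dN/dw

rng = np.random.default_rng(1)
w, v, b = rng.normal(size=(3, 5))   # H = 5 hidden units
x = 0.3

dv, db, dw = gradients(x, w, v, b)

# Finite-difference check of dN/dw_0:
eps = 1e-6
w_pert = w.copy(); w_pert[0] += eps
fd = (network(x, w_pert, v, b) - network(x, w, v, b)) / eps
print(abs(fd - dw[0]))   # close to 0: analytic and numeric gradients agree
```

The same check can be repeated for any component of $v$, $b$, or $w$.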
Once the derivative of the error with respect to the network parameters has been defined, it is straightforward to employ any minimization technique. It must also be noted that the batch mode of weight updates may be employed.
5 Illustration of the Method
In this section we describe the solution of the TPSBVP using an FFNN. To illustrate the method we will consider the 2nd-order TPSBVP

$$x^m \frac{d^2 y(x)}{dx^2} = f(x, y, y'), \quad (7)$$

where $x \in [a, b]$, with the BCs $y(a) = A$ and $y(b) = B$. A trial solution can be written as

$$y_t(x, p) = \frac{bA - aB}{b - a} + \frac{(B - A)\, x}{b - a} + (x - a)(x - b)\, N(x, p), \quad (8)$$

where $N(x, p)$ is the output of an FFNN with one input unit for $x$ and weights $p$.
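As a quick sanity check (our own sketch, not from the paper), the trial form (8) satisfies the Dirichlet BCs for any network output $N$, because the quadratic factor vanishes at both endpoints:

```python
import numpy as np

def trial(x, a, b, A, B, N):
    """Trial solution of eq. (8); N is any function of x."""
    linear = (b * A - a * B) / (b - a) + (B - A) * x / (b - a)
    return linear + (x - a) * (x - b) * N(x)

# An arbitrary stand-in for the network output:
N = lambda x: np.sin(3 * x) + 0.5

a, b, A, B = 0.0, 1.0, 2.0, -1.0
print(trial(a, a, b, A, B, N))   # 2.0  -> y_t(a) = A
print(trial(b, a, b, A, B, N))   # -1.0 -> y_t(b) = B
```

Only the network term $(x-a)(x-b)N(x,p)$ carries adjustable parameters, so training never disturbs the BCs.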
Table 3: Performance of the training, with epoch, time, and MSE, for Example 1.

| TrainFcn | Performance of train | Epoch | Time | MSE |
|---|---|---|---|---|
| Trainlm | 0.00 | 75 | 0.0001 | 2.1859e-009 |
| Trainbfg | 6.42e-21 | 6187 | 0.0423 | 6.6751e-009 |
| Trainbr | 5.88e-12 | 1861 | 0.0030 | 2.0236e-011 |
Note that $y_t(x)$ satisfies the BCs by construction. The error quantity to be minimized is given by

$$E[p] = \sum_{i=1}^{n} \left[ x_i^m \frac{d^2 y_t(x_i, p)}{dx^2} - f\left(x_i, y_t(x_i, p), \frac{dy_t(x_i, p)}{dx}\right) \right]^2, \quad (9)$$

where the $x_i \in [a, b]$. Since

$$\frac{dy_t(x, p)}{dx} = \frac{B - A}{b - a} + \left[(x - a) + (x - b)\right] N(x, p) + (x - a)(x - b)\, \frac{dN(x, p)}{dx},$$

$$\frac{d^2 y_t(x, p)}{dx^2} = 2\, N(x, p) + 2\left[(x - a) + (x - b)\right] \frac{dN(x, p)}{dx} + (x - a)(x - b)\, \frac{d^2 N(x, p)}{dx^2}, \quad (10)$$

it is straightforward to compute the gradient of the error with respect to the parameters $p$ using (6). The same holds for all subsequent model problems.
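The derivatives in (10) require $dN/dx$ and $d^2N/dx^2$ of the network itself; for tanh hidden units these are available in closed form. A small sketch of ours, checked against central finite differences:

```python
import numpy as np

def N_and_derivs(x, w, v, b):
    """Network output N(x) and its first two derivatives w.r.t. the input x,
    for N(x) = sum_i v_i * tanh(w_i * x + b_i)."""
    z = w * x + b
    s = np.tanh(z)
    ds = 1.0 - s**2           # tanh'
    d2s = -2.0 * s * ds       # tanh''
    N = np.sum(v * s)
    dN = np.sum(v * w * ds)
    d2N = np.sum(v * w**2 * d2s)
    return N, dN, d2N

rng = np.random.default_rng(2)
w, v, b = rng.normal(size=(3, 5))
x, eps = 0.4, 1e-5

N0, dN, d2N = N_and_derivs(x, w, v, b)
Np, _, _ = N_and_derivs(x + eps, w, v, b)
Nm, _, _ = N_and_derivs(x - eps, w, v, b)

print(abs((Np - Nm) / (2 * eps) - dN))          # central difference vs dN/dx
print(abs((Np - 2 * N0 + Nm) / eps**2 - d2N))   # second difference vs d2N/dx2
```

Substituting these closed-form derivatives into (10) gives the residual of the differential equation at each collocation point without any numerical differentiation.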
6 Example
In this section we report numerical results using a multilayer FFNN having one hidden layer with 5 hidden units (neurons) and one linear output unit. The sigmoid activation of each hidden unit is tansig. The analytic solution $y_a(x)$ was known in advance; therefore we test the accuracy of the obtained solutions by computing the deviation

$$\Delta y(x) = \left| y_t(x) - y_a(x) \right|. \quad (11)$$
In order to illustrate the characteristics of the solutions provided by the neural network method, we provide figures displaying the corresponding deviation $\Delta y(x)$ both at the few points (training points) that were used for training and at many other points (test points) of the domain of the equation. The latter kind of figure is of major importance, since it shows the interpolation capabilities of the neural solution, which appear to be superior compared to solutions obtained by other methods. Moreover, we can consider points outside the training interval in order to obtain an estimate of the extrapolation performance of the obtained numerical solution.
Example 1. Consider the following 2nd-order TPSBVP:

$$y'' + \frac{1}{x}\, y' + \cos(x) + \frac{\sin(x)}{x} = 0, \quad x \in [0, 1], \quad (12)$$
Table 4: Weights and biases of the network for the different training algorithms, Example 1.

(a) Weights and biases for trainlm

| Net.IW{1,1} | Net.LW{2,1} | Net.B{1} |
|---|---|---|
| 0.2858 | 0.0759 | 0.1299 |
| 0.7572 | 0.0540 | 0.5688 |
| 0.7537 | 0.5308 | 0.4694 |
| 0.3804 | 0.7792 | 0.0119 |
| 0.5678 | 0.9340 | 0.3371 |

(b) Weights and biases for trainbfg

| Net.IW{1,1} | Net.LW{2,1} | Net.B{1} |
|---|---|---|
| 0.4068 | 0.8334 | 0.2601 |
| 0.1126 | 0.4036 | 0.0868 |
| 0.4438 | 0.3902 | 0.4294 |
| 0.3002 | 0.3604 | 0.2573 |
| 0.4014 | 0.1403 | 0.2976 |

(c) Weights and biases for trainbr

| Net.IW{1,1} | Net.LW{2,1} | Net.B{1} |
|---|---|---|
| 0.1696 | 0.8803 | 0.4075 |
| 0.2788 | 0.4711 | 0.8445 |
| 0.1982 | 0.4040 | 0.6153 |
| 0.1951 | 0.1792 | 0.3766 |
| 0.3268 | 0.9689 | 0.8772 |
with BCs $y'(0) = 0$, $y(1) = \cos(1)$. The analytic solution is $y_a(x) = \cos(x)$; according to (8), the trial neural form of the solution is taken to be

$$y_t(x) = \cos(1)\, x + x(x - 1)\, N(x, p). \quad (13)$$
The FFNN was trained using a grid of ten equidistant points in [0, 1]. Figure 1 displays the analytic and neural solutions obtained with different training algorithms. The neural results for the different training algorithms, namely Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg), and Bayesian Regulation (trainbr), are introduced in Table 1, and their errors are given in Table 2. Table 3 gives the performance of the training with epoch and time, and Table 4 gives the weights and biases of the designed network.
Ramos in [20] solved this example using a $C^1$-linearization method and gave the absolute error 4.079613e-04; Kumar in [21] solved this example by the three-point finite difference technique and gave the absolute error 4.4e-05.
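It is easy to confirm numerically that $y_a(x) = \cos(x)$ indeed satisfies (12) at interior points (our own check, evaluated away from the singular point $x = 0$):

```python
import numpy as np

x = np.linspace(0.1, 1.0, 10)   # interior grid; x = 0 is the singular point

# Residual of eq. (12) for y = cos(x): y'' + y'/x + cos(x) + sin(x)/x
y, dy, d2y = np.cos(x), -np.sin(x), -np.cos(x)
res = d2y + dy / x + np.cos(x) + np.sin(x) / x
print(np.max(np.abs(res)))      # 0 up to rounding
```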
Table 5: Analytic and neural solutions of Example 2. The last three columns are the output of the suggested FFNN $y_t(x)$ for different training algorithms.

| Input $x$ | Analytic solution $y_a(x)$ | Trainlm | Trainbfg | Trainbr |
|---|---|---|---|---|
| 0.0 | 1 | 1 | 0.999999999999781 | 0.999999991504880 |
| 0.1 | 1.10517091807565 | 1.10514495970783 | 1.10517935005948 | 1.10518327439088 |
| 0.2 | 1.22140275816017 | 1.22139291634314 | 1.22140560576571 | 1.22140301047712 |
| 0.3 | 1.34985880757600 | 1.34985880757600 | 1.34985880757533 | 1.34985789841558 |
| 0.4 | 1.49182469764127 | 1.49182469764127 | 1.49182469764283 | 1.49182590291778 |
| 0.5 | 1.64872127070013 | 1.64872127070013 | 1.64872127069896 | 1.64871934058054 |
| 0.6 | 1.82211880039051 | 1.82211880039051 | 1.82211880039100 | 1.82211668702999 |
| 0.7 | 2.01375270747048 | 2.01374698451417 | 2.01375253599035 | 2.01375577873317 |
| 0.8 | 2.22554092849247 | 2.22554092849247 | 2.22554092849241 | 2.22553872338720 |
| 0.9 | 2.45960311115695 | 2.45965304884168 | 2.45961509099396 | 2.45960396018800 |
| 1.0 | 2.71828182845905 | 2.71828182845905 | 2.71828182845889 | 2.71828168663973 |
Table 6: Accuracy of the solutions for Example 2. The error is $E(x) = |y_t(x) - y_a(x)|$, where $y_t(x)$ is computed by the following training algorithms.

| $x$ | Trainlm | Trainbfg | Trainbr |
|---|---|---|---|
| 0.0 | 0 | 2.19047002758543e-13 | 8.49512027389920e-09 |
| 0.1 | 2.59583678221542e-05 | 8.43198383004840e-06 | 1.23563152347739e-05 |
| 0.2 | 9.84181703400644e-06 | 2.84760554247754e-06 | 2.52316951554477e-07 |
| 0.3 | 0 | 6.74571509762245e-13 | 9.09160424944489e-07 |
| 0.4 | 0 | 1.55675472512939e-12 | 1.20527650837587e-06 |
| 0.5 | 0 | 1.16662235427611e-12 | 1.93011959059852e-06 |
| 0.6 | 2.22044604925031e-16 | 4.86721773995669e-13 | 2.11336051969546e-06 |
| 0.7 | 5.72295631107167e-06 | 1.71480126542889e-07 | 3.07126269616376e-06 |
| 0.8 | 0 | 5.72875080706581e-14 | 2.20510526371953e-06 |
| 0.9 | 4.99376847331590e-05 | 1.19798370104007e-05 | 8.49031051686211e-07 |
| 1.0 | 0 | 1.51878509768721e-13 | 1.41819320731429e-07 |
Example 2. Consider the following 2nd-order TPSBVP:

$$y'' = -\frac{1 - 2x}{x}\, y' - \frac{x - 1}{x}\, y, \quad x \in [0, 1], \quad (14)$$

with BCs $y(0) = 1$, $y(1) = \exp(1)$; the analytic solution is $y_a(x) = \exp(x)$. According to (8), the trial neural form of the solution is

$$y_t(x) = 1 + (\exp(1) - 1)\, x + x(x - 1)\, N(x, p). \quad (15)$$
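That $y_a(x) = e^x$ satisfies (14) follows because $e^x\left[-(1-2x) - (x-1)\right]/x = e^x$; a quick numerical confirmation of ours, at interior points away from the singularity at $x = 0$:

```python
import numpy as np

x = np.linspace(0.1, 0.9, 9)    # interior points; x = 0 is the singular point

# Residual of eq. (14) for y = exp(x): all derivatives of exp coincide.
y = np.exp(x)
rhs = -(1 - 2 * x) / x * y - (x - 1) / x * y
res = y - rhs                   # y'' minus the right-hand side
print(np.max(np.abs(res)))      # 0 up to rounding
```

The trial form (15) meets both BCs by construction, since the term $x(x-1)N(x,p)$ vanishes at $x = 0$ and $x = 1$.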
The FFNN was trained using a grid of ten equidistant points in [0, 1]. Figure 2 displays the analytic and neural solutions obtained with different training algorithms. The neural network results for the different training algorithms, namely trainlm, trainbfg, and trainbr, are introduced in Table 5, and their errors are given in Table 6. Table 7 gives the performance of the training
Figure 1: Analytic and neural solutions of Example 1 using trainbfg, trainbr, and trainlm.
with epoch and time, and Table 8 gives the weights and biases of the designed network.
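Putting the pieces together for Example 2, the following is our own minimal end-to-end reconstruction (not the authors' MATLAB code): it forms the trial solution (15) with a 5-unit tanh network, collocates the residual of (14) on interior points that avoid the singularity at $x = 0$, and fits the 15 parameters with SciPy's general-purpose least-squares solver in place of the trainlm/trainbfg/trainbr functions used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

H = 5                                    # hidden units, as in the paper
xs = np.linspace(0.05, 0.95, 20)         # collocation points inside (0, 1)

def net(x, p):
    """N, dN/dx, d2N/dx2 for N(x) = sum_i v_i tanh(w_i x + b_i)."""
    w, v, b = p[:H], p[H:2*H], p[2*H:]
    s = np.tanh(np.outer(x, w) + b)
    ds = 1 - s**2
    return s @ v, ds @ (v * w), (-2 * s * ds) @ (v * w**2)

def residual(p):
    N, dN, d2N = net(xs, p)
    # Trial solution (15) and its derivatives, as in eq. (10):
    yt   = 1 + (np.e - 1) * xs + xs * (xs - 1) * N
    dyt  = (np.e - 1) + (2 * xs - 1) * N + xs * (xs - 1) * dN
    d2yt = 2 * N + 2 * (2 * xs - 1) * dN + xs * (xs - 1) * d2N
    # Residual of eq. (14): y'' + ((1-2x)/x) y' + ((x-1)/x) y = 0
    return d2yt + (1 - 2 * xs) / xs * dyt + (xs - 1) / xs * yt

rng = np.random.default_rng(0)
sol = least_squares(residual, rng.normal(scale=0.1, size=3 * H))

N, _, _ = net(xs, sol.x)
yt = 1 + (np.e - 1) * xs + xs * (xs - 1) * N
print(np.max(np.abs(yt - np.exp(xs))))   # deviation from the analytic exp(x)
```

Since the BCs hold by construction and the residual is driven to zero at the collocation points, the fitted trial solution tracks $e^x$ over the whole interval, mirroring the behavior reported in Tables 5 and 6.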
Table 7: Performance of the training, with epoch, time, and MSE, for Example 2.

| TrainFcn | Performance of train | Epoch | Time | MSE |
|---|---|---|---|---|
| Trainlm | 7.04e-33 | 1902 | 0.0030 | 2.6977e-010 |
| Trainbfg | 4.00e-28 | 2093 | 0.0104 | 1.8225e-011 |
| Trainbr | 2.43e-12 | 3481 | 0.0056 | 1.458e-011 |
Table 8: Weights and biases of the network for the different training algorithms, Example 2.

(a) Weights and biases for trainlm

| Net.IW{1,1} | Net.LW{2,1} | Net.B{1} |
|---|---|---|
| 0.7094 | 0.1626 | 0.5853 |
| 0.7547 | 0.1190 | 0.2238 |
| 0.2760 | 0.4984 | 0.7513 |
| 0.6797 | 0.9597 | 0.2551 |
| 0.6551 | 0.3404 | 0.5060 |

(b) Weights and biases for trainbfg

| Net.IW{1,1} | Net.LW{2,1} | Net.B{1} |
|---|---|---|
| 0.7094 | 0.1626 | 0.5853 |
| 0.7547 | 0.1190 | 0.2238 |
| 0.2760 | 0.4984 | 0.7513 |
| 0.6797 | 0.9597 | 0.2551 |
| 0.6551 | 0.3404 | 0.5060 |

(c) Weights and biases for trainbr

| Net.IW{1,1} | Net.LW{2,1} | Net.B{1} |
|---|---|---|
| 0.9357 | 0.7406 | 0.2122 |
| 0.4579 | 0.7437 | 0.0985 |
| 0.2405 | 0.1059 | 0.8236 |
| 0.7639 | 0.6816 | 0.1750 |
| 0.7593 | 0.4633 | 0.1636 |
7 Conclusion
From the previously mentioned problems it is clear that the proposed network can handle the TPSBVP effectively and provide an accurate approximate solution throughout the whole domain, not only at the training points. As evident from the tables, the results of the proposed network are more precise compared to the methods suggested in [20, 21].
In general, the practical results on the FFNN show that the Levenberg-Marquardt algorithm (trainlm) has the fastest convergence, followed by trainbfg and then Bayesian Regulation (trainbr). However, trainbr does not perform well on function approximation problems. The performance of the various algorithms can be affected by the accuracy required of the approximation.
Figure 2: Analytic and neural solutions of Example 2 using different training algorithms.
References
[1] S. Agatonovic-Kustrin and R. Beresford, "Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research," Journal of Pharmaceutical and Biomedical Analysis, vol. 22, no. 5, pp. 717-727, 2000.
[2] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, "Neural-network methods for boundary value problems with irregular boundaries," IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1041-1049, 2000.
[3] L. N. M. Tawfiq, Design and training artificial neural networks for solving differential equations [Ph.D. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2004.
[4] A. Malek and R. Shekari Beidokhti, "Numerical solution for high order differential equations using a hybrid neural network-optimization method," Applied Mathematics and Computation, vol. 183, no. 1, pp. 260-271, 2006.
[5] H. Akca, M. H. Al-Lail, and V. Covachev, "Survey on wavelet transform and application in ODE and wavelet networks," Advances in Dynamical Systems and Applications, vol. 1, no. 2, pp. 129-162, 2006.
[6] K. S. Mc Fall, An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries [Ph.D. thesis], Georgia Institute of Technology, 2006.
[7] A. Junaid, M. A. Z. Raja, and I. M. Qureshi, "Evolutionary computing approach for the solution of initial value problems in ordinary differential equations," World Academy of Science, Engineering and Technology, vol. 55, pp. 578-5581, 2009.
[8] J. Abdul Samath, P. S. Kumar, and A. Begum, "Solution of linear electrical circuit problem using neural networks," International Journal of Computer Applications, vol. 2, no. 1, pp. 6-13, 2010.
[9] K. I. Ibraheem and B. M. Khalaf, "Shooting neural networks algorithm for solving boundary value problems in ODEs," Applications and Applied Mathematics, vol. 6, no. 11, pp. 1927-1941, 2011.
[10] S. A. Hoda I. and H. A. Nagla, "On neural network methods for mixed boundary value problems," International Journal of Nonlinear Science, vol. 11, no. 3, pp. 312-316, 2011.
[11] K. Majidzadeh, "Inverse problem with respect to domain and artificial neural network algorithm for the solution," Mathematical Problems in Engineering, vol. 2011, Article ID 145608, 16 pages, 2011.
[12] Y. A. Oraibi, Design feed forward neural networks for solving ordinary initial value problem [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2011.
[13] M. H. Ali, Design fast feed forward neural networks to solve two point boundary value problems [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2012.
[14] I. Rachunkova, S. Stanek, and M. Tvrdy, Solvability of Nonlinear Singular Problems for Ordinary Differential Equations, Hindawi Publishing Corporation, New York, NY, USA, 2008.
[15] L. F. Shampine, J. Kierzenka, and M. W. Reichelt, "Solving boundary value problems for ordinary differential equations in MATLAB with bvp4c," 2000.
[16] H. W. Rasheed, Efficient semi-analytic technique for solving second order singular ordinary boundary value problems [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2011.
[17] A. I. Galushkin, Neural Networks Theory, Springer, Berlin, Germany, 2007.
[18] K. Mehrotra, C. K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, Springer, New York, NY, USA, 1996.
[19] A. Ghaffari, H. Abdollahi, M. R. Khoshayand, I. S. Bozchalooi, A. Dadgar, and M. Rafiee-Tehrani, "Performance comparison of neural network training algorithms in modeling of bimodal drug delivery," International Journal of Pharmaceutics, vol. 327, no. 1-2, pp. 126-138, 2006.
[20] J. I. Ramos, "Piecewise quasilinearization techniques for singular boundary-value problems," Computer Physics Communications, vol. 158, no. 1, pp. 12-25, 2004.
[21] M. Kumar, "A three-point finite difference method for a class of singular two-point boundary value problems," Journal of Computational and Applied Mathematics, vol. 145, no. 1, pp. 89-97, 2002.
[7] A Junaid M A Z Raja and I M Qureshi ldquoEvolutionarycomputing approach for the solution of initial value problemsin ordinary differential equationsrdquo World Academy of ScienceEngineering and Technology vol 55 pp 578ndash5581 2009
[8] J Abdul Samath P S Kumar and A Begum ldquoSolution of linearelectrical circuit problem using neural networksrdquo InternationalJournal of Computer Applications vol 2 no 1 pp 6ndash13 2010
[9] K I Ibraheem and B M Khalaf ldquoShooting neural networksalgorithm for solving boundary value problems in ODEsrdquoApplications and Applied Mathematics vol 6 no 11 pp 1927ndash1941 2011
[10] S A Hoda I and H A Nagla ldquoOn neural network methodsfor mixed boundary value problemsrdquo International Journal ofNonlinear Science vol 11 no 3 pp 312ndash316 2011
ISRN Applied Mathematics 7
[11] K Majidzadeh ldquoInverse problem with respect to domain andartificial neural network algorithm for the solutionrdquoMathemat-ical Problems in Engineering vol 2011 Article ID 145608 16pages 2011
[12] Y A Oraibi Design feed forward neural networks for solvingordinary initial value problem [MS thesis] University of Bagh-dad College of Education Ibn Al-Haitham 2011
[13] M H Ali Design fast feed forward neural networks to solvetwo point boundary value problems [MS thesis] University ofBaghdad College of Education Ibn Al-Haitham 2012
[14] I Rachunkova S Stanek andM Tvrdy Solvability of NonlinearSingular Problems for Ordinary Differential Equations HindawiPublishing Corporation New York USA 2008
[15] L F Shampine J Kierzenka and M W Reichelt ldquoSolvingBoundary Value Problems for Ordinary Differential Equationsin Matlab with bvp4crdquo 2000
[16] H W Rasheed Efficient semi-analytic technique for solvingsecond order singular ordinary boundary value problems [MSthesis] University of Baghdad College of Education Ibn-Al-Haitham 2011
[17] A I Galushkin Neural Networks Theory Springer BerlinGermany 2007
[18] K Mehrotra C K Mohan and S Ranka Elements of ArtificialNeural Networks Springer New York NY USA 1996
[19] A Ghaffari H Abdollahi M R Khoshayand I S BozchalooiA Dadgar and M Rafiee-Tehrani ldquoPerformance comparisonof neural network training algorithms in modeling of bimodaldrug deliveryrdquo International Journal of Pharmaceutics vol 327no 1-2 pp 126ndash138 2006
[20] J I Ramos ldquoPiecewise quasilinearization techniques for singu-lar boundary-value problemsrdquo Computer Physics Communica-tions vol 158 no 1 pp 12ndash25 2004
[21] M Kumar ldquoA three-point finite difference method for a classof singular two-point boundary value problemsrdquo Journal ofComputational and Applied Mathematics vol 145 no 1 pp 89ndash97 2002
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
4 ISRN Applied Mathematics
Table 3 The performance of the train with epoch and time for Example 1
TrainFcn Performance of train Epoch Time MSETrainlm 000 75 00001 21859119890 minus 009
Trainbfg 642119890 minus 21 6187 00423 66751119890 minus 009
Trainbr 588119890 minus 12 1861 00030 20236119890 minus 011
Note that y_t(x) satisfies the BC by construction. The error quantity to be minimized is given by

\[
E[p] = \sum_{i=1}^{n} \left( \frac{d^{2}y_t(x_i,p)}{dx^{2}} - f\!\left(x_i,\, y_t(x_i,p),\, \frac{dy_t(x_i,p)}{dx}\right) \right)^{2}, \tag{9}
\]

where the x_i ∈ [a, b]. Since

\[
\frac{dy_t(x,p)}{dx} = \frac{B-A}{b-a} + \left[(x-a)+(x-b)\right]N(x,p) + (x-a)(x-b)\,\frac{dN(x,p)}{dx},
\]
\[
\frac{d^{2}y_t(x,p)}{dx^{2}} = 2N(x,p) + 2\left[(x-a)+(x-b)\right]\frac{dN(x,p)}{dx} + (x-a)(x-b)\,\frac{d^{2}N(x,p)}{dx^{2}}, \tag{10}
\]

it is straightforward to compute the gradient of the error with respect to the parameters p using (6). The same holds for all subsequent model problems.
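To make the training procedure concrete, here is a minimal Python sketch of the scheme defined by (9) and (10). This is an illustration only, not the authors' implementation: the paper trains with MATLAB's trainlm/trainbfg/trainbr, while the sketch substitutes scipy's BFGS; the boundary data are taken from Example 2 below, and the training grid is shifted off the singular point x = 0 (an assumption of the sketch, to avoid division by zero).

```python
import numpy as np
from scipy.optimize import minimize

# Trial solution of the form used above:
#   y_t(x) = A + (B - A)(x - a)/(b - a) + (x - a)(x - b) N(x, p),
# which satisfies y(a) = A, y(b) = B by construction.
a, b, A, B = 0.0, 1.0, 1.0, np.e    # Example 2 boundary data (assumed here)
H = 5                               # hidden units, as in the paper

def unpack(p):
    return p[:H], p[H:2*H], p[2*H:]

def N(x, p):
    """One-hidden-layer tanh net N and its first two x-derivatives."""
    w, beta, v = unpack(p)
    s = np.tanh(np.outer(x, w) + beta)            # (n, H) hidden activations
    n  = s @ v                                    # N(x, p)
    n1 = (1 - s**2) @ (v * w)                     # dN/dx
    n2 = (-2 * s * (1 - s**2)) @ (v * w**2)       # d2N/dx2
    return n, n1, n2

def trial(x, p):
    """y_t and its derivatives, matching (10) with a = 0, b = 1."""
    n, n1, n2 = N(x, p)
    q, q1 = (x - a) * (x - b), (x - a) + (x - b)
    y  = A + (B - A) * (x - a) / (b - a) + q * n
    y1 = (B - A) / (b - a) + q1 * n + q * n1
    y2 = 2 * n + 2 * q1 * n1 + q * n2
    return y, y1, y2

def f(x, y, y1):
    """Right-hand side of Example 2: y'' = f(x, y, y')."""
    return -((1 - 2*x) / x) * y1 - ((x - 1) / x) * y

def E(p, x):
    """Error functional (9): sum of squared ODE residuals."""
    y, y1, y2 = trial(x, p)
    return np.sum((y2 - f(x, y, y1))**2)

rng = np.random.default_rng(0)
x_train = np.linspace(0.1, 1.0, 10)   # grid shifted off the singularity at x = 0
res = minimize(E, rng.uniform(-1, 1, 3*H), args=(x_train,), method='BFGS')
y_hat, _, _ = trial(x_train, res.x)
print(np.max(np.abs(y_hat - np.exp(x_train))))   # deviation from y_a(x) = exp(x)
```

The boundary values are exact for any parameter vector p; training only has to shrink the interior residual, which is what makes this formulation attractive for singular problems.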
6 Examples

In this section we report numerical results using a multilayer FFNN having one hidden layer with 5 hidden units (neurons) and one linear output unit; the activation function of each hidden unit is tansig. The analytic solution y_a(x) is known in advance, so we test the accuracy of the obtained solutions by computing the deviation

\[
\Delta y(x) = \left| y_t(x) - y_a(x) \right|. \tag{11}
\]

In order to illustrate the characteristics of the solutions provided by the neural network method, we provide figures displaying the corresponding deviation Δy(x) both at the few points (training points) that were used for training and at many other points (test points) of the domain. The latter figures are of major importance, since they show the interpolation capability of the neural solution, which appears to be superior to solutions obtained by other methods. Moreover, we can consider points outside the training interval in order to obtain an estimate of the extrapolation performance of the numerical solution.
Example 1. Consider the following 2nd-order TPSBVP:

\[
y'' + \frac{1}{x}\,y' + \cos(x) + \frac{\sin(x)}{x} = 0, \quad x \in [0, 1], \tag{12}
\]
Table 4: Weights and biases of the network for the different training algorithms, Example 1.

(a) Weights and bias for trainlm
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.2858        0.0759        0.1299
0.7572        0.0540        0.5688
0.7537        0.5308        0.4694
0.3804        0.7792        0.0119
0.5678        0.9340        0.3371

(b) Weights and bias for trainbfg
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.4068        0.8334        0.2601
0.1126        0.4036        0.0868
0.4438        0.3902        0.4294
0.3002        0.3604        0.2573
0.4014        0.1403        0.2976

(c) Weights and bias for trainbr
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.1696        0.8803        0.4075
0.2788        0.4711        0.8445
0.1982        0.4040        0.6153
0.1951        0.1792        0.3766
0.3268        0.9689        0.8772
with BC y'(0) = 0, y(1) = cos(1). The analytic solution is y_a(x) = cos(x); according to (8), the trial neural form of the solution is taken to be

\[
y_t(x) = \cos(1)\,x + x(x-1)\,N(x, p). \tag{13}
\]
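As a quick consistency check, one can verify symbolically that y_a(x) = cos(x) does satisfy (12); a small sympy sketch (added here as an illustration, not part of the original paper):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.cos(x)                       # proposed analytic solution of Example 1

# Left-hand side of (12): y'' + (1/x) y' + cos(x) + sin(x)/x
lhs = sp.diff(y, x, 2) + sp.diff(y, x)/x + sp.cos(x) + sp.sin(x)/x
print(sp.simplify(lhs))             # 0, so y_a = cos(x) solves the equation
```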
The FFNN is trained using a grid of ten equidistant points in [0, 1]. Figure 1 displays the analytic and neural solutions obtained with the different training algorithms. The neural results for the different training algorithms, Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg), and Bayesian Regulation (trainbr), are presented in Table 1, and their errors are given in Table 2. Table 3 gives the performance of the training with epoch and time, and Table 4 gives the weights and biases of the designed network.

Ramos [20] solved this example using a C¹-linearization method and reported an absolute error of 4.079613e-04; Kumar [21] solved it by a three-point finite difference technique and reported an absolute error of 4.4e-05.
Table 5: Analytic and neural solutions of Example 2. x is the input; the last three columns give the output y_t(x) of the suggested FFNN for each training algorithm.

x     y_a(x)             Trainlm            Trainbfg           Trainbr
0.0   1                  1                  0.999999999999781  0.999999991504880
0.1   1.10517091807565   1.10514495970783   1.10517935005948   1.10518327439088
0.2   1.22140275816017   1.22139291634314   1.22140560576571   1.22140301047712
0.3   1.34985880757600   1.34985880757600   1.34985880757533   1.34985789841558
0.4   1.49182469764127   1.49182469764127   1.49182469764283   1.49182590291778
0.5   1.64872127070013   1.64872127070013   1.64872127069896   1.64871934058054
0.6   1.82211880039051   1.82211880039051   1.82211880039100   1.82211668702999
0.7   2.01375270747048   2.01374698451417   2.01375253599035   2.01375577873317
0.8   2.22554092849247   2.22554092849247   2.22554092849241   2.22553872338720
0.9   2.45960311115695   2.45965304884168   2.45961509099396   2.45960396018800
1.0   2.71828182845905   2.71828182845905   2.71828182845889   2.71828168663973
Table 6: Accuracy of solutions for Example 2. The error E(x) = |y_t(x) - y_a(x)|, where y_t(x) is computed by each training algorithm.

x     Trainlm                Trainbfg               Trainbr
0.0   0                      2.19047002758543e-13   8.49512027389920e-09
0.1   2.59583678221542e-05   8.43198383004840e-06   1.23563152347739e-05
0.2   9.84181703400644e-06   2.84760554247754e-06   2.52316951554477e-07
0.3   0                      6.74571509762245e-13   9.09160424944489e-07
0.4   0                      1.55675472512939e-12   1.20527650837587e-06
0.5   0                      1.16662235427611e-12   1.93011959059852e-06
0.6   2.22044604925031e-16   4.86721773995669e-13   2.11336051969546e-06
0.7   5.72295631107167e-06   1.71480126542889e-07   3.07126269616376e-06
0.8   0                      5.72875080706581e-14   2.20510526371953e-06
0.9   4.99376847331590e-05   1.19798370104007e-05   8.49031051686211e-07
1.0   0                      1.51878509768721e-13   1.41819320731429e-07
Example 2. Consider the following 2nd-order TPSBVP:

\[
y'' = -\left[\frac{1-2x}{x}\right] y' - \left[\frac{x-1}{x}\right] y, \quad x \in [0, 1], \tag{14}
\]

with BC y(0) = 1, y(1) = exp(1). The analytic solution is y_a(x) = exp(x); according to (8), the trial neural form of the solution is

\[
y_t(x) = 1 + \left(\exp(1) - 1\right)x + x(x-1)\,N(x, p). \tag{15}
\]
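As with Example 1, both the analytic solution and the boundary behavior of the trial form (15) can be verified symbolically; the sympy sketch below (an added illustration) checks that y_a(x) = exp(x) annihilates the residual of (14) and that (15) meets both boundary conditions for any network output N:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.exp(x)                        # proposed analytic solution of Example 2

# Residual of (14): y'' + ((1 - 2x)/x) y' + ((x - 1)/x) y should vanish
res = sp.diff(y, x, 2) + ((1 - 2*x)/x)*sp.diff(y, x) + ((x - 1)/x)*y
print(sp.simplify(res))              # 0

# The trial form (15) satisfies y(0) = 1 and y(1) = e for any N(x, p)
N = sp.Function('N')
yt = 1 + (sp.E - 1)*x + x*(x - 1)*N(x)
print(yt.subs(x, 0), sp.simplify(yt.subs(x, 1)))
```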
The FFNN is trained using a grid of ten equidistant points in [0, 1]. Figure 2 displays the analytic and neural solutions obtained with the different training algorithms. The neural network results for trainlm, trainbfg, and trainbr are presented in Table 5, their errors are given in Table 6, Table 7 gives the performance of the training with epoch and time, and Table 8 gives the weights and biases of the designed network.

Figure 1: Analytic and neural solutions of Example 1 using trainbfg, trainbr, and trainlm.
Table 7: Performance of the training algorithms for Example 2 (epochs and time).

TrainFcn   Performance   Epoch   Time     MSE
Trainlm    7.04e-33      1902    0.0030   2.6977e-10
Trainbfg   4.00e-28      2093    0.0104   1.8225e-11
Trainbr    2.43e-12      3481    0.0056   1.458e-11
Table 8: Weights and biases of the network for the different training algorithms, Example 2.

(a) Weights and bias for trainlm
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.7094        0.1626        0.5853
0.7547        0.1190        0.2238
0.2760        0.4984        0.7513
0.6797        0.9597        0.2551
0.6551        0.3404        0.5060

(b) Weights and bias for trainbfg
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.7094        0.1626        0.5853
0.7547        0.1190        0.2238
0.2760        0.4984        0.7513
0.6797        0.9597        0.2551
0.6551        0.3404        0.5060

(c) Weights and bias for trainbr
Net.IW{1,1}   Net.LW{2,1}   Net.B{1}
0.9357        0.7406        0.2122
0.4579        0.7437        0.0985
0.2405        0.1059        0.8236
0.7639        0.6816        0.1750
0.7593        0.4633        0.1636
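Given such a table, the trained solution can be re-evaluated outside MATLAB. The sketch below rebuilds N(x, p) from the trainbr weights in Table 8(c) under two assumptions, a tansig hidden layer with a purelin output and a zero output-layer bias (Net.b{2} is not reported in the table), so it reproduces the boundary values of (15) exactly but need not match Table 6 digit for digit:

```python
import numpy as np

# Trainbr weights from Table 8(c); the output bias is assumed zero (not listed).
IW = np.array([0.9357, 0.4579, 0.2405, 0.7639, 0.7593])   # Net.IW{1,1}
LW = np.array([0.7406, 0.7437, 0.1059, 0.6816, 0.4633])   # Net.LW{2,1}
B1 = np.array([0.2122, 0.0985, 0.8236, 0.1750, 0.1636])   # Net.B{1}

def y_trial(x):
    """Trial form (15) with N rebuilt as a tansig/purelin network."""
    n = np.tanh(np.outer(x, IW) + B1) @ LW        # N(x, p)
    return 1 + (np.e - 1)*x + x*(x - 1)*n

xs = np.linspace(0.0, 1.0, 11)
print(np.max(np.abs(y_trial(xs) - np.exp(xs))))   # deviation from y_a = exp(x)
```

Whatever the network parameters, y_trial(0) = 1 and y_trial(1) = e hold exactly, since the factor x(x - 1) kills the network term at both endpoints.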
7 Conclusion

From the problems above it is clear that the proposed network can handle TPSBVP effectively and provide an accurate approximate solution throughout the whole domain, not only at the training points. As is evident from the tables, the results of the proposed network are more precise than those of the methods suggested in [20, 21].

In general, the practical results on the FFNN show that the Levenberg-Marquardt algorithm (trainlm) has the fastest convergence, followed by trainbfg and then Bayesian Regulation (trainbr). However, trainbr does not perform well on function approximation problems. The performance of the various algorithms can also be affected by the accuracy required of the approximation.
Figure 2: Analytic and neural solutions of Example 2 using the different training algorithms.
References
[1] S. Agatonovic-Kustrin and R. Beresford, "Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research," Journal of Pharmaceutical and Biomedical Analysis, vol. 22, no. 5, pp. 717-727, 2000.
[2] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, "Neural-network methods for boundary value problems with irregular boundaries," IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1041-1049, 2000.
[3] L. N. M. Tawfiq, Design and training artificial neural networks for solving differential equations [Ph.D. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2004.
[4] A. Malek and R. Shekari Beidokhti, "Numerical solution for high order differential equations using a hybrid neural network-optimization method," Applied Mathematics and Computation, vol. 183, no. 1, pp. 260-271, 2006.
[5] H. Akca, M. H. Al-Lail, and V. Covachev, "Survey on wavelet transform and application in ODE and wavelet networks," Advances in Dynamical Systems and Applications, vol. 1, no. 2, pp. 129-162, 2006.
[6] K. S. McFall, An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries [Ph.D. thesis], Georgia Institute of Technology, 2006.
[7] A. Junaid, M. A. Z. Raja, and I. M. Qureshi, "Evolutionary computing approach for the solution of initial value problems in ordinary differential equations," World Academy of Science, Engineering and Technology, vol. 55, pp. 578-581, 2009.
[8] J. Abdul Samath, P. S. Kumar, and A. Begum, "Solution of linear electrical circuit problem using neural networks," International Journal of Computer Applications, vol. 2, no. 1, pp. 6-13, 2010.
[9] K. I. Ibraheem and B. M. Khalaf, "Shooting neural networks algorithm for solving boundary value problems in ODEs," Applications and Applied Mathematics, vol. 6, no. 11, pp. 1927-1941, 2011.
[10] S. A. Hoda I. and H. A. Nagla, "On neural network methods for mixed boundary value problems," International Journal of Nonlinear Science, vol. 11, no. 3, pp. 312-316, 2011.
[11] K. Majidzadeh, "Inverse problem with respect to domain and artificial neural network algorithm for the solution," Mathematical Problems in Engineering, vol. 2011, Article ID 145608, 16 pages, 2011.
[12] Y. A. Oraibi, Design feed forward neural networks for solving ordinary initial value problem [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2011.
[13] M. H. Ali, Design fast feed forward neural networks to solve two point boundary value problems [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2012.
[14] I. Rachunkova, S. Stanek, and M. Tvrdy, Solvability of Nonlinear Singular Problems for Ordinary Differential Equations, Hindawi Publishing Corporation, New York, NY, USA, 2008.
[15] L. F. Shampine, J. Kierzenka, and M. W. Reichelt, "Solving boundary value problems for ordinary differential equations in MATLAB with bvp4c," 2000.
[16] H. W. Rasheed, Efficient semi-analytic technique for solving second order singular ordinary boundary value problems [M.S. thesis], University of Baghdad, College of Education Ibn Al-Haitham, 2011.
[17] A. I. Galushkin, Neural Networks Theory, Springer, Berlin, Germany, 2007.
[18] K. Mehrotra, C. K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, Springer, New York, NY, USA, 1996.
[19] A. Ghaffari, H. Abdollahi, M. R. Khoshayand, I. S. Bozchalooi, A. Dadgar, and M. Rafiee-Tehrani, "Performance comparison of neural network training algorithms in modeling of bimodal drug delivery," International Journal of Pharmaceutics, vol. 327, no. 1-2, pp. 126-138, 2006.
[20] J. I. Ramos, "Piecewise quasilinearization techniques for singular boundary-value problems," Computer Physics Communications, vol. 158, no. 1, pp. 12-25, 2004.
[21] M. Kumar, "A three-point finite difference method for a class of singular two-point boundary value problems," Journal of Computational and Applied Mathematics, vol. 145, no. 1, pp. 89-97, 2002.
ISRN Applied Mathematics 5
Table 5 Analytic and neural solutions of Example 2
Input Analytic solution Out of suggested FFNN 119910119905(119909) for different training algorithms
119909 119910119886(119909) Trainlm Trainbfg Trainbr
00 1 1 0999999999999781 099999999150488001 110517091807565 110514495970783 110517935005948 11051832743908802 122140275816017 122139291634314 122140560576571 12214030104771203 134985880757600 134985880757600 134985880757533 13498578984155804 149182469764127 149182469764127 149182469764283 14918259029177805 164872127070013 164872127070013 164872127069896 16487193405805406 182211880039051 182211880039051 182211880039100 18221166870299907 201375270747048 201374698451417 201375253599035 20137557787331708 222554092849247 222554092849247 222554092849241 22255387233872009 245960311115695 245965304884168 245961509099396 24596039601880010 271828182845905 271828182845905 271828182845889 271828168663973
Table 6 Accuracy of solutions for Example 2
The error 119864(119909) = |119910119905(119909) minus 119910
119886(119909)| where 119910
119905(119909) computed by the following training algorithm
Trainlm Trainbfg trainbr0 219047002758543119890 minus 13 849512027389920119890 minus 09
259583678221542119890 minus 05 843198383004840119890 minus 06 123563152347739119890 minus 05
984181703400644119890 minus 06 284760554247754119890 minus 06 252316951554477119890 minus 07
0 674571509762245119890 minus 13 909160424944489119890 minus 07
0 155675472512939119890 minus 12 120527650837587119890 minus 06
0 116662235427611119890 minus 12 193011959059852119890 minus 06
222044604925031119890 minus 16 486721773995669119890 minus 13 211336051969546119890 minus 06
572295631107167119890 minus 06 171480126542889119890 minus 07 307126269616376119890 minus 06
0 572875080706581119890 minus 14 220510526371953119890 minus 06
499376847331590119890 minus 05 119798370104007119890 minus 05 849031051686211119890 minus 07
0 151878509768721119890 minus 13 141819320731429119890 minus 07
Example 2 Consider the following 2nd-order TPSBVP
11991010158401015840= minus [
(1 minus 2119909)
119909] 1199101015840minus [(119909 minus 1)
119909] 119910 119909 isin [0 1] (14)
with BC 119910(0) = 1 119910(1) = exp(1) and the analytic solution is119910119886(119909) = exp(119909) according to (8) the trial neural form of the
solution is
119910119905(119909) = 1 + (exp (1) minus 1) 119909 + 119909 (119909 minus 1)119873 (119909 119901) (15)
The FFNN trained using a grid of ten equidistant pointsin [0 1] Figure 2 displays the analytic and neural solutionswith different training algorithmsThe neural network resultswith different types of training algorithm such as trainlmtrainbfg and trainbr introduced in Table 5 and its errorsgiven in Table 6 Table 7 gives the performance of the train
0 01 02 03 04 05 06 07 08 09 105
06
07
08
09
1
11
12
yt
x
ya TrainbfgTrainbrTrainlm
Figure 1 Analytic and neural solutions of Example 1 using trainbfgtrainbr and trainlm
with epoch and time and Table 8 gives the weight and bias ofthe designer network
6 ISRN Applied Mathematics
Table 7 The performance of the train with epoch and time ofExample 2
TrainFcn Performance of train Epoch Time MSETrainlm 704119890 minus 33 1902 00030 26977119890 minus 010
Trainbfg 400119890 minus 28 2093 00104 18225119890 minus 011
Trainbr 243119890 minus 12 3481 00056 1458119890 minus 011
Table 8 Weight and bias of the network for different trainingalgorithm Example 2
(a)
Weights and bias for trainlmNetIW1 1 NetLW2 1 NetB107094 01626 0585307547 01190 0223802760 04984 0751306797 09597 0255106551 03404 05060
(b)
Weights and bias for trainbfgNetIW1 1 NetLW2 1 NetB107094 01626 0585307547 01190 0223802760 04984 0751306797 09597 0255106551 03404 05060
(c)
Weights and bias for trainbrNetIW1 1 NetLW2 1 NetB109357 0 7406 0212204579 07437 0098502405 01059 0823607639 06816 0175007593 04633 01636
7 Conclusion
From the previous mentioned problems it is clear that theproposed network can be handle effectively TPSBVP andprovide accurate approximate solution throughout the wholedomain and not only at the training points As evident fromthe tables the results of proposed network are more preciseas compared to the method suggested in [20 21]
In general the practical results on FFNN show thatthe Levenberg-Marquardt algorithm (trainlm) will have thefastest convergence then trainbfg and then Bayesian Regula-tion (trainbr) However ldquotrainbrrdquo does not perform well onfunction approximation problems The performance of thevarious algorithms can be affected by the accuracy requiredof the approximation
0 01 02 03 04 05 06 07 08 09 1081
121416182
22242628
yt
x
ya TrainbfgTrainbrTrainlm
Figure 2 Analytic and neural solutions of Example 2 using differenttraining algorithms
References
[1] S Agatonovic-Kustrin and R Beresford ldquoBasic concepts ofartificial neural network (ANN) modeling and its applicationin pharmaceutical researchrdquo Journal of Pharmaceutical andBiomedical Analysis vol 22 no 5 pp 717ndash727 2000
[2] I E Lagaris A C Likas and D G Papageorgiou ldquoNeural-network methods for boundary value problems with irregularboundariesrdquo IEEE Transactions on Neural Networks vol 11 no5 pp 1041ndash1049 2000
[3] L N M Tawfiq Design and training artificial neural networksfor solving differential equations [PhD thesis] University ofBaghdad College of Education Ibn-Al-Haitham 2004
[4] A Malek and R Shekari Beidokhti ldquoNumerical solutionfor high order differential equations using a hybrid neu-ral networkmdashoptimization methodrdquo Applied Mathematics andComputation vol 183 no 1 pp 260ndash271 2006
[5] H Akca M H Al-Lail and V Covachev ldquoSurvey on wavelettransform and application in ODE and wavelet networksrdquoAdvances in Dynamical Systems and Applications vol 1 no 2pp 129ndash162 2006
[6] K S Mc Fall An artificial neural network method for solvingboundary value problems with arbitrary irregular boundaries[PhD thesis] Georgia Institute of Technology 2006
[7] A Junaid M A Z Raja and I M Qureshi ldquoEvolutionarycomputing approach for the solution of initial value problemsin ordinary differential equationsrdquo World Academy of ScienceEngineering and Technology vol 55 pp 578ndash5581 2009
[8] J Abdul Samath P S Kumar and A Begum ldquoSolution of linearelectrical circuit problem using neural networksrdquo InternationalJournal of Computer Applications vol 2 no 1 pp 6ndash13 2010
[9] K I Ibraheem and B M Khalaf ldquoShooting neural networksalgorithm for solving boundary value problems in ODEsrdquoApplications and Applied Mathematics vol 6 no 11 pp 1927ndash1941 2011
[10] S A Hoda I and H A Nagla ldquoOn neural network methodsfor mixed boundary value problemsrdquo International Journal ofNonlinear Science vol 11 no 3 pp 312ndash316 2011
ISRN Applied Mathematics 7
[11] K Majidzadeh ldquoInverse problem with respect to domain andartificial neural network algorithm for the solutionrdquoMathemat-ical Problems in Engineering vol 2011 Article ID 145608 16pages 2011
[12] Y A Oraibi Design feed forward neural networks for solvingordinary initial value problem [MS thesis] University of Bagh-dad College of Education Ibn Al-Haitham 2011
[13] M H Ali Design fast feed forward neural networks to solvetwo point boundary value problems [MS thesis] University ofBaghdad College of Education Ibn Al-Haitham 2012
[14] I Rachunkova S Stanek andM Tvrdy Solvability of NonlinearSingular Problems for Ordinary Differential Equations HindawiPublishing Corporation New York USA 2008
[15] L F Shampine J Kierzenka and M W Reichelt ldquoSolvingBoundary Value Problems for Ordinary Differential Equationsin Matlab with bvp4crdquo 2000
[16] H W Rasheed Efficient semi-analytic technique for solvingsecond order singular ordinary boundary value problems [MSthesis] University of Baghdad College of Education Ibn-Al-Haitham 2011
[17] A I Galushkin Neural Networks Theory Springer BerlinGermany 2007
[18] K Mehrotra C K Mohan and S Ranka Elements of ArtificialNeural Networks Springer New York NY USA 1996
[19] A Ghaffari H Abdollahi M R Khoshayand I S BozchalooiA Dadgar and M Rafiee-Tehrani ldquoPerformance comparisonof neural network training algorithms in modeling of bimodaldrug deliveryrdquo International Journal of Pharmaceutics vol 327no 1-2 pp 126ndash138 2006
[20] J I Ramos ldquoPiecewise quasilinearization techniques for singu-lar boundary-value problemsrdquo Computer Physics Communica-tions vol 158 no 1 pp 12ndash25 2004
[21] M Kumar ldquoA three-point finite difference method for a classof singular two-point boundary value problemsrdquo Journal ofComputational and Applied Mathematics vol 145 no 1 pp 89ndash97 2002
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
6 ISRN Applied Mathematics
Table 7 The performance of the train with epoch and time ofExample 2
TrainFcn Performance of train Epoch Time MSETrainlm 704119890 minus 33 1902 00030 26977119890 minus 010
Trainbfg 400119890 minus 28 2093 00104 18225119890 minus 011
Trainbr 243119890 minus 12 3481 00056 1458119890 minus 011
Table 8: Weights and biases of the network for the different training algorithms, Example 2.

(a) Weights and biases for trainlm

net.IW{1,1}   net.LW{2,1}   net.b{1}
0.7094        0.1626        0.5853
0.7547        0.1190        0.2238
0.2760        0.4984        0.7513
0.6797        0.9597        0.2551
0.6551        0.3404        0.5060

(b) Weights and biases for trainbfg

net.IW{1,1}   net.LW{2,1}   net.b{1}
0.7094        0.1626        0.5853
0.7547        0.1190        0.2238
0.2760        0.4984        0.7513
0.6797        0.9597        0.2551
0.6551        0.3404        0.5060

(c) Weights and biases for trainbr

net.IW{1,1}   net.LW{2,1}   net.b{1}
0.9357        0.7406        0.2122
0.4579        0.7437        0.0985
0.2405        0.1059        0.8236
0.7639        0.6816        0.1750
0.7593        0.4633        0.1636
7. Conclusion

From the previously mentioned problems, it is clear that the proposed network can handle two-point singular boundary value problems (TPSBVP) effectively and provide an accurate approximate solution throughout the whole domain, not only at the training points. As is evident from the tables, the results of the proposed network are more precise than those of the methods suggested in [20, 21].
In general, the practical results on the FFNN show that the Levenberg-Marquardt algorithm (trainlm) has the fastest convergence, followed by trainbfg and then Bayesian Regulation (trainbr). However, trainbr does not perform well on function approximation problems. The performance of the various algorithms can also be affected by the accuracy required of the approximation.
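The trainlm function referred to above is MATLAB's implementation of Levenberg-Marquardt training. As a rough illustration of the damped Gauss-Newton iteration that Levenberg-Marquardt performs, here is a minimal Python sketch on a toy linear least-squares problem; the function names and the toy model are illustrative assumptions, not code from the paper, and MATLAB's trainlm is a considerably more elaborate implementation:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, mu=1e-2):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps,
    shrinking the damping factor mu on progress and growing it otherwise."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ J + mu * np.eye(len(p))   # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p - step
        cost_new = np.sum(residual(p_new) ** 2)
        if cost_new < cost:                 # accept step, trust the model more
            p, cost, mu = p_new, cost_new, mu * 0.5
        else:                               # reject step, damp harder
            mu *= 2.0
    return p, cost

# Toy problem (assumed for illustration): fit y = p0*x + p1*x^2
# to exact samples of 2x + 3x^2 on [0, 1].
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 3.0 * x ** 2

def residual(p):
    return p[0] * x + p[1] * x ** 2 - y

def jacobian(p):
    return np.stack([x, x ** 2], axis=1)    # d(residual)/dp, shape (20, 2)

p, cost = levenberg_marquardt(residual, jacobian, [0.0, 0.0])
```

For small damping the step approaches the Gauss-Newton step (fast near a minimum); for large damping it approaches scaled gradient descent (robust far from it), which is why trainlm tends to converge fastest on well-behaved approximation problems like those in the tables above.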
[Figure 2: Analytic and neural solutions of Example 2 using different training algorithms (trainbfg, trainbr, and trainlm); y(x) plotted against x on [0, 1].]
References
[1] S. Agatonovic-Kustrin and R. Beresford, "Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research," Journal of Pharmaceutical and Biomedical Analysis, vol. 22, no. 5, pp. 717-727, 2000.
[2] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, "Neural-network methods for boundary value problems with irregular boundaries," IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1041-1049, 2000.
[3] L. N. M. Tawfiq, Design and training artificial neural networks for solving differential equations, Ph.D. thesis, University of Baghdad, College of Education Ibn Al-Haitham, 2004.
[4] A. Malek and R. Shekari Beidokhti, "Numerical solution for high order differential equations using a hybrid neural network-optimization method," Applied Mathematics and Computation, vol. 183, no. 1, pp. 260-271, 2006.
[5] H. Akca, M. H. Al-Lail, and V. Covachev, "Survey on wavelet transform and application in ODE and wavelet networks," Advances in Dynamical Systems and Applications, vol. 1, no. 2, pp. 129-162, 2006.
[6] K. S. McFall, An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries, Ph.D. thesis, Georgia Institute of Technology, 2006.
[7] A. Junaid, M. A. Z. Raja, and I. M. Qureshi, "Evolutionary computing approach for the solution of initial value problems in ordinary differential equations," World Academy of Science, Engineering and Technology, vol. 55, pp. 578-581, 2009.
[8] J. Abdul Samath, P. S. Kumar, and A. Begum, "Solution of linear electrical circuit problem using neural networks," International Journal of Computer Applications, vol. 2, no. 1, pp. 6-13, 2010.
[9] K. I. Ibraheem and B. M. Khalaf, "Shooting neural networks algorithm for solving boundary value problems in ODEs," Applications and Applied Mathematics, vol. 6, no. 11, pp. 1927-1941, 2011.
[10] S. A. Hoda I. and H. A. Nagla, "On neural network methods for mixed boundary value problems," International Journal of Nonlinear Science, vol. 11, no. 3, pp. 312-316, 2011.
[11] K. Majidzadeh, "Inverse problem with respect to domain and artificial neural network algorithm for the solution," Mathematical Problems in Engineering, vol. 2011, Article ID 145608, 16 pages, 2011.
[12] Y. A. Oraibi, Design feed forward neural networks for solving ordinary initial value problem, M.S. thesis, University of Baghdad, College of Education Ibn Al-Haitham, 2011.
[13] M. H. Ali, Design fast feed forward neural networks to solve two point boundary value problems, M.S. thesis, University of Baghdad, College of Education Ibn Al-Haitham, 2012.
[14] I. Rachunkova, S. Stanek, and M. Tvrdy, Solvability of Nonlinear Singular Problems for Ordinary Differential Equations, Hindawi Publishing Corporation, New York, NY, USA, 2008.
[15] L. F. Shampine, J. Kierzenka, and M. W. Reichelt, "Solving boundary value problems for ordinary differential equations in MATLAB with bvp4c," 2000.
[16] H. W. Rasheed, Efficient semi-analytic technique for solving second order singular ordinary boundary value problems, M.S. thesis, University of Baghdad, College of Education Ibn Al-Haitham, 2011.
[17] A. I. Galushkin, Neural Networks Theory, Springer, Berlin, Germany, 2007.
[18] K. Mehrotra, C. K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, Springer, New York, NY, USA, 1996.
[19] A. Ghaffari, H. Abdollahi, M. R. Khoshayand, I. S. Bozchalooi, A. Dadgar, and M. Rafiee-Tehrani, "Performance comparison of neural network training algorithms in modeling of bimodal drug delivery," International Journal of Pharmaceutics, vol. 327, no. 1-2, pp. 126-138, 2006.
[20] J. I. Ramos, "Piecewise quasilinearization techniques for singular boundary-value problems," Computer Physics Communications, vol. 158, no. 1, pp. 12-25, 2004.
[21] M. Kumar, "A three-point finite difference method for a class of singular two-point boundary value problems," Journal of Computational and Applied Mathematics, vol. 145, no. 1, pp. 89-97, 2002.