
Research Article

Eigen Solution of Neural Networks and Its Application in Prediction and Analysis of Controller Parameters of Grinding Robot in Complex Environments

Shixi Tang,1,2 Jinan Gu,1 Keming Tang,2 Wei Ding,1 and Zhengyang Shang1

1Mechanical Information Research Center, Jiangsu University, Zhenjiang, Jiangsu 212013, China
2School of Information Engineering, Yancheng Teachers University, Yancheng, Jiangsu 224002, China

Correspondence should be addressed to Jinan Gu; [email protected]

Received 11 May 2018; Revised 27 August 2018; Accepted 10 September 2018; Published 3 January 2019

Guest Editor: Andy Annamalai

Complexity, Volume 2019, Article ID 5296123, 21 pages. https://doi.org/10.1155/2019/5296123

Copyright © 2019 Shixi Tang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The robot dynamic model is often rarely known due to various uncertainties, such as parametric uncertainties or modeling errors, existing in complex environments. It is a key problem to find the relationship between the changes of neural network structure and the changes of input and output environments, and their mutual influences. Firstly, this paper defined the conceptions of neural network solution, neural network eigen solution, neural network complete solution, and neural network partial solution, as well as the conceptions of input environments, output environments, and macrostructure of neural networks. Secondly, an eigen solution theory of general neural networks was proposed and proven, including the consistent approximation theorem, the eigen solution existence theorem, the consistency theorem of complete solution and partial solution, and the no solution theorem of neural networks. Lastly, to verify the eigen solution theory of neural networks, the proposed theory was applied to a novel prediction and analysis model of controller parameters of a grinding robot in complex environments with deep neural networks, building a prediction model with deep learning neural networks for the controller parameters of the grinding robot. The morphological subfeature graph with multimoment was constructed to describe the block surface morphology using rugosity, standard deviation, skewness, and kurtosis. The results of theoretical analysis and experimental test show that the output traits have an optimal effect with joint action. When the input features functioning in prediction increase, higher predicted accuracy can be obtained. And when the output traits involved in prediction increase, more output traits can be predicted. The proposed prediction and analysis model with deep neural networks can be used to find and predict the inherent laws of the data. Compared with the traditional prediction model, the proposed model can predict output features simultaneously and is more stable.

1. Introduction

Grinding is one of the most environment-related manufacturing processes in complex environments. These processes are often scattered, inefficient, contaminated, labor-intensive, and foremost field-intensive. The processes account for 12–15% of overall manufacturing costs and 20–30% of total manufacturing time [1]. The surface roughness of the casting block and the grinding force prediction are important aspects of the grinding process for optimization and monitoring. The grinding process is controlled by the grinding robot servo control system.

Neural networks can be helpful in reducing the data dimensionality, and the optimization of neural network training may be employed to enhance the learning and adaptation performance of robots [2]. With their powerful approximation ability, neural networks have been utilized in many promising fields, such as modeling and identification of complex nonlinear systems, optimization, and automatic control. The components integrated with a complex system may interact with each other and bring difficulties to the control [3]. Many grinding robot servo control systems are therefore constructed with neural networks.

Adaptive neural network controllers have been studied in many aspects. Approximation-based controllers are designed for induction motors with input saturation [4] and for a 3-DOF robotic manipulator that is subject to backlash-like hysteresis and friction [5]. Parameter-based controllers are designed to identify the unknown robot kinematic and dynamic parameters of robot manipulators with finite-time convergence [6] and to perform haptic identification for uncertain robot manipulators [7]. Predictive controllers are designed for predicting the behaviour of electronic circuits [8], trajectory tracking [9], trajectory tracking of underactuated surface vessels [10], attitude tracking control of a quad tilt rotor aircraft [11], and making the tracking error converge to a neighborhood of the origin [12]. A federal Kalman filter based on neural networks is used in the velocity and attitude matching of transfer alignment [13]. An iterative neural dynamic programming method is provided for affine and nonaffine nonlinear systems by using system data rather than accurate system models [14]. Our work builds on these merits of adaptive neural network controllers.

Different controller frameworks of neural networks are constructed for different nonlinear systems. A general framework of the nonlinear recurrent neural network was proposed for solving the online generalized linear matrix equation with global convergence property [15]. Another framework, which combines modified frequency slice wavelet transform and convolutional neural networks, was proposed for automatic atrial fibrillation beat identification [16]. A stability criterion for an impulsive stochastic reaction-diffusion cellular neural network framework was derived via fixed-point theory [17]. The subnetworks of a class of coupled fractional-order neural networks consisting of N identical subnetworks have (r + 1)^n locally Mittag-Leffler stable equilibria [18]. Our work was implemented based on these frameworks.

The concept of convolutional neural networks made the framework implementation of multilayer neural networks possible [19]. A pretraining method was proposed for hidden layer node weights using the unsupervised restricted Boltzmann machine to solve the overfitting problem in gradient descent training of multilayer neural networks [20]. A Dropout strategy was proposed to provide a high-efficiency training method for deep neural network weights with nonsaturating neurons; this method can prevent overfitting and improve accuracy [21]. Reinforcement learning theory was introduced into deep learning neural networks to improve their reasoning ability [22]. Based on these, the above developments were systematically described and formally named deep learning [23]. Improved variants of deep learning theory [24] also appeared. Deep learning theory has been widely used because of its high efficiency and high accuracy [25]. The Go program AlphaGo was constructed successfully using deep learning neural networks and successfully defeated human Go experts [26]. Deep learning theory has reached satisfactory success in the adaptive output feedback control of unmanned aircraft [27], image quality assessment [28], image character recognition in natural environments [29], change detection of images [30], image classification [31], and robotic guidance [32]. A semiactive nonsmooth control method with deep learning was proposed to suppress the harmful effect of surface motion on building structures [33].

So deep neural networks can be used in controllers to process massive amounts of unsupervised data in complex scenarios. Neural networks have been widely studied and applied because of their ability to solve complex linearly inseparable problems, especially their deep learning ability, which has significant advantages in cross-domain big data analysis with unstructured and unidentified patterns. Further, the conditions of existence and global exponential synchronization of almost periodic solutions of delayed quaternion-valued neural networks were investigated [34], and the function projective synchronization problem of neural networks with mixed time-varying delays and uncertainties in asymmetric coupling was investigated [35]. Neural networks have thus made great achievements in determining success or failure and in improving performance.

But the robot dynamic model is often rarely known due to the complex robot mechanism, let alone the various uncertainties such as parametric uncertainties or modeling errors existing in the robot dynamics. Therefore, it is a key problem to find the relationship between the changes of neural network structure and the changes of input and output environments, and to find their mutual influence. What are the relationships between the changes of neural network structure and the changes of input and output environments? How do they influence each other? These are the fundamental credit assignment problems [36]. This is a hotspot in current research on the interaction of neural networks with their environments, especially the interaction of input and output data with neural networks [37]. Several problems remain to be solved: (1) the quantitative description of input environments, macrostructure of neural networks, and output environments; (2) the quantitative description of the relationship between input environments, macrostructure of neural networks, and output environments; and (3) experimental verification of the relationship between input environments, macrostructure of neural networks, and output environments. Solving these problems provides support for the wider application of neural networks.

In order to solve these problems, our work focuses on the eigen solution of neural networks of robots in complex environments, including kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances, by directly mapping the morphological subfeatures to the controller parameters of the grinding robot. The concepts and the eigen solution theorems of neural networks are studied in the second part of the paper. The third part gives the application of the neural network eigen solution rules to the prediction analysis of controller parameters of the grinding robot, verifying the correctness of the theory from a practical point of view. In the last part of the paper, we summarize our research work. The main contributions of this paper include the following.

(1) In order to describe the relationship between the changes of neural networks, the changes of input environments, and the changes of output environments, this paper constructed the neural network eigen solution theory, which defined the conceptions of neural networks, neural network solution, neural network eigen solution, and complete solution and partial solution of neural networks. Further, this paper proposed the consistent approximation theorem, eigen solution existence theorem, consistency theorem of complete solution and partial solution, and no solution theorem of neural networks.

(2) In order to verify the eigen solution theory of neural networks, the morphological subfeature graph with multimoment was constructed to describe the block surface morphology using rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4}, and a prediction model with deep learning neural networks was then built for the controller parameters of the grinding robot. The predicted accuracies of the proposed model are more than 95%. Compared with the traditional prediction model, the proposed model can predict output features simultaneously and is more stable.

(3) The experimental results verify the correctness of the neural network eigen solution theory. The results of theoretical analysis and experimental test showed that the output traits have an optimal effect with joint action. When the input features functioning in prediction increase, higher predicted accuracy can be obtained. And when the output traits involved in prediction increase, more output traits can be predicted. The deep learning model can find and predict the inherent laws of the data. Moreover, if the analyzed object data are gained at random, there is no nonrandom feature, and the deep learning controller provides a random predicted result. The output accuracies of the random-output experiments are between 65% and 70% rather than 50%, due to the superb effect.

2. Solution of Neural Networks

In our study, we try to find the essential relationship between the changes of neural network structure and the changes of input and output environments, and their mutual influence, in the presence of kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances. We first defined neural networks from the perspective of the input and output environments, and the consistent approximation theorem is proposed with this concept. Then, the concept of a solution of neural networks was defined on this basis, along with the corresponding concepts of isomorphism solution, eigen solution, complete solution, and partial solution. With these concepts and this theorem, the eigen solution existence theorem, the consistency theorem of complete solution and partial solution, and the no solution theorem were proposed. The relations between the definitions and theorems are described in detail (see Figure 1).

Definition 1 (neural networks). The neural networks are defined as {Input, Net, Output} (see Figure 2). Input = {S_i} is the input object set. Net = {W_{l,k,ij}, b_l} are the network layers (l > 0, i > 0, j > 0), and W_{l,k,ij} is the k-th convolutional kernel [i × j] of the l-th layer. If the network does not use convolution, let k = 1; then i is the i-th element in the (l − 1)-th layer, j is the j-th element in the l-th layer, and W_{l,ij} is the weight between the i-th element and the j-th element. b_l is the bias in the l-th layer. Output = {T_i} is the output target vector set. ψ is the feature acquisition function, f_l is the neural network function, and f is the target function corresponding to the target vector set {T_i}.
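As a minimal illustration of Definition 1, the following sketch (in Python; the container type and field names are our own, only the symbols mirror the paper) collects the triple {Input, Net, Output} in one structure:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Hypothetical container for the triple {Input, Net, Output} of Definition 1.
@dataclass
class NeuralNetworks:
    inputs: List[object]                # Input = {S_i}, the input object set
    weights: Dict[Tuple, float]         # W_{l,k,ij}: k-th kernel/weight of the l-th layer
    biases: Dict[int, float]            # b_l: bias in the l-th layer
    targets: List[object]               # Output = {T_i}, the output target vector set
    psi: Callable = field(default=lambda s: s)  # feature acquisition function psi
```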

Definition 2 (solution of neural networks). In this study, {Net_tr} is a trained neural network, {Input_test} is a given test set, and {Output_test} is the output target vector set corresponding to the test set for the trained neural networks.

[Figure 1: The relation of definitions and theorems. Definitions 1–4 (neural networks; solution of neural networks; isomorphism solution and eigen solution of neural networks; complete solution and partial solution of neural networks) lead to Theorems 1–4 (consistent approximation theorem, built on the consistent approximation theorem of feedforward neural networks [G. B. Huang, 2006]; eigen solution existence theorem; consistency theorem of complete solution and partial solution; no solution theorem).]


Let {Output_given} be the given target vector set; then the error is given as

$$\mathrm{Error} = \max_p\, e_{i,p} = \max_p \left\| T_{i,\mathrm{test}} - T_{i,\mathrm{given}} \right\|\!\left(p, S_{i,\mathrm{test},p}\right), \tag{1}$$

where p is the given number of test sets. Suppose that the minimum threshold is Threshold; if Error is higher than the threshold, the trained neural networks cannot meet the actual needs.

If Error < Threshold, then {Net_tr} is called a solution of the neural networks {Input, Net, Output}.

If Error = 0, {Net_tr} is called an ideal solution or a theoretical solution of the neural networks {Input, Net, Output}.

If 0 < Error < Threshold, {Net_tr} is called an approximate solution of the neural networks {Input, Net, Output}.

If Error ≥ Threshold, we say that the neural networks {Input, Net, Output} have no solution.
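A minimal sketch of this classification (in Python; the array layout and names are our assumptions, with T_test and T_given holding the test and given target vectors over the p test sets):

```python
import numpy as np

def classify_solution(T_test, T_given, threshold):
    # Eq. (1): Error is the maximum deviation between produced and given targets
    error = float(np.max(np.linalg.norm(np.asarray(T_test) - np.asarray(T_given), axis=-1)))
    if error == 0:
        return error, "ideal (theoretical) solution"
    if error < threshold:
        return error, "approximate solution"
    return error, "no solution"  # Error >= Threshold: cannot meet actual needs
```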

Definition 3 (isomorphism solution and eigen solution of neural networks). The neural networks {W_{l,k,ij}, b_l} are trained independently using m different input object vector sets {S_i}_r to obtain m different solutions {Net_tr}_r, r = 1, …, m. All these solutions are called isomorphism solutions of the neural networks {W_{l,k,ij}, b_l} for each other.

Let the input object vector set {S_i}_1 have the solution {Net_tr}_1 corresponding to the neural networks {W_{l,k,ij}, b_l}.

{S_i'}_r is given as

$$\{S_i'\}_r = \psi\left(\{S_i\}_1\right), \quad r = 1, \ldots, m, \tag{2}$$

where ψ is the eigen acquisition function. The original input object vector set {S_i}_1 is transformed into the feature data object vector set {S_i'}_r. If {S_i'}_r has the solution {Net_tr}_r, the solution {Net_tr}_r is called an eigen solution of {Net_tr}_1. The eigen solutions {Net_tr}_r and {Net_tr}_1 are isomorphism solutions for each other. In this study, we focus on the neural network eigen solution.

Definition 4 (complete solution and partial solution of neural networks). Consider the output target vector set {T_i}, i ∈ [1, Max]. If the output target vector set includes all the target vectors, {T_i}_Max, then {Net_tr}_Max is called the complete solution of the neural networks {W_{l,k,ij}, b_l}. If the output target vector set is part of the target vectors, {T_i}_Part, then {Net_tr}_Part is called a partial solution of the neural networks {W_{l,k,ij}, b_l}.

Theorem 1 (consistent approximation theorem). Let the neural networks contain l layers; each layer has n hidden layer nodes, and each activation function g_{i,j} of a hidden layer node is a bounded continuous segmentation function. Then the function of the neural networks is f_l, given as

$$f_l = \sum_{i=1}^{l} \sum_{j=1}^{n} b_{i,j}\, g_{i,j}. \tag{3}$$

Let the output objective vector set {T_i} have the corresponding objective function f. The residual function e_l between the neural networks and the output target vector is defined as

$$e_l = f - f_l. \tag{4}$$

If $\int_{\mathbb{R}} g(x)\,dx \neq 0$ and f and f_l are continuous, the limit of e_l is given as

$$\lim_{n \to \infty \ \text{or} \ l \to \infty} e_l = 0. \tag{5}$$

That is, e_l converges and declines monotonically.

Proof.

(1) If the neural networks are feedforward neural networks and the hidden layer number is l = 1, then Theorem 1 is proved by the consistent approximation theorem of feedforward neural networks proposed in [38].

(2) If the neural networks are feedforward neural networks and the hidden layer number is l ≠ 1, then f_l is given as

$$f_l = \sum_{i=1}^{l} \sum_{j=1}^{n} b_{i,j}\, g_{i,j} = \sum_{j=1}^{n \ast l} b_{i,j}\, g_{i,j} = \sum_{j=1}^{n'} b_{i,j}\, g_{i,j}. \tag{6}$$

Then, the converted problem is consistent with the consistent approximation theorem of feedforward neural networks proposed in [39]; hence, Theorem 1 is true.

[Figure 2: Basic structure of neural networks: ψ(Input = {S_i}) → f_l(Net = {W_{l,k,ij}, b_l}) → f(Output = {T_i}).]


(3) If the neural networks are recurrent neural networks NET, the feedback structure of the recurrent neural networks is expanded in the time dimension. Let the network NET run from time t_0; every time step of network NET is expanded into one of the layers of the feedforward neural networks NET*, and each layer has exactly the same activity values as the corresponding time step of network NET. For any τ ≥ t_0, the connection weight w_ij between the neuron or external input j and neuron i of network NET is copied into w_ij(τ + 1) between the neuron or external input j in the τ-th layer and neuron i in the (τ + 1)-th layer of network NET* (see Figure 3).

As time goes by, the depth of NET* can become infinitely deep; that is, the recurrent neural networks NET can be made equivalent to feedforward neural networks as n → ∞, and this case is exactly covered by the consistent approximation theorem of feedforward neural networks proposed in [38], which also validates Theorem 1.

Theorem 2 (eigen solution existence theorem). If the neural networks {W_{l,k,ij}, b_l} have a solution {Net_tr}, then for the neural networks {W_{l,k,ij}, b_l} there exists the eigen solution {Net_tr}_r.

Proof. Take the eigen acquisition function ψ(x) = x; then {S_i'}_r is given as

$$\{S_i'\}_r = \psi\left(\{S_i\}\right) = \{S_i\}. \tag{7}$$

So {Net_tr}_r is given as

$$\{Net_{tr}\}_r = \{Net_{tr}\}. \tag{8}$$

Proof finished.

Theorem 3 (consistency theorem of complete solution and partial solution). If the neural networks {W_{l,k,ij}, b_l} have a solution {Net_tr}, then for the neural networks {W_{l,k,ij}, b_l} there exists the eigen solution {Net_tr}_r. If the partial solution {Net_tr}_Part is gained by training the neural networks {W_{l,k,ij}, b_l} with the input processing object vector set {S_i}, then the complete solution {Net_tr}_Max can be gained by training the neural networks {W_{l,k,ij}, b_l} with the input processing object vector set {S_i}. The consistency theorem of complete solution and partial solution shows that the output traits have an optimal effect with joint action: the more output traits are involved in prediction, the more output traits can be predicted.

Proof. The target function corresponding to the output target vector set {T_i}_Part is f, and the partial solution is {Net_tr}_Part, corresponding to the mapping function f_l of the neural networks {W_{l,k,ij}, b_l}. The solution exists, so the approximation e_l = 0 holds; the convergence and monotone decrease of e_l also hold, and f and f_l are continuous.

When the output target vector set is {T_i}_Max, the target function is still f, and the complete solution is {Net_tr}_Max, corresponding to the mapping function f_l of the neural networks {W_{l,k,ij}, b_l}, but the number of target vector set nodes increases (see Figure 4).

Therefore, the number of effective hidden nodes is increased. According to the consistent approximation theorem, e_l = 0 is approximated more strictly; that is, the neural networks {W_{l,k,ij}, b_l} trained with the input object vector set {S_i} can also yield a complete solution {Net_tr}_Max.

Proof finished.

Theorem 4 (no solution theorem). If the trained neural networks {W_{l,k,ij}, b_l} with input processing object vector set {S_i} have no complete solution, then the trained neural networks {W_{l,k,ij}, b_l} with input processing object vector set {S_i} have no partial solution. The no solution theorem shows that if the analyzed data objects are random and there is no nonrandom trait, then the predicted result of the deep learning controller is also random; the controller cannot find a nonrandom rule from random inputs. In other words, the prediction of the deep learning controller cannot come from nothing.

Proof. When the output target vector set is {T_i}_Max, the target function is f, and the complete solution corresponding to the mapping function f_l of the neural networks {W_{l,k,ij}, b_l} does not exist, so e_l = 0 does not hold.

The target function corresponding to the target vector set {T_i}_Part is f, and the input processing object vector set for training the neural networks {W_{l,k,ij}, b_l} is {S_i}. Here, the number of target vector set nodes is reduced, and the number n of effective nodes in the hidden layer is reduced, so e_l = 0 holds even less strictly; i.e., the trained neural networks {W_{l,k,ij}, b_l} with input processing object vector set {S_i} have no partial solution.

[Figure 3: Recurrent neural networks are expanded into feedforward neural networks: time steps t_0, t_0 + 1, t_0 + 2, …, t become layers connected by the copied weights w(t_0 + 1), w(t_0 + 2), …, w(t).]


Proof finished.

3. Prediction and Analysis of Controller Parameters of Grinding Robot

3.1. Grinding Robot Controller Model. In the grinding robot servo control system, a dynamic real-time adaptive positioning mechanism, driven by the machine vision system, was added to the traditional robot servo control system. Our grinding robot controller model describes the closed-loop control process for robot grinding (see Figure 5).

The dynamic real-time adaptive controlling method of the grinding robot is as follows: the image I_t of the processing casting block surface is obtained using the vision measurement and positioning system at time t. L_t and W_t are the length and width of the corresponding defective area waiting to be handled. The corresponding handling is done to obtain an appropriate knowledge feature value S_t, given as

$$S_t = I_t\left(L_t, W_t, F_{flat}\right). \tag{9}$$

In our research, we set $S_t = L_t \ast W_t \ast F_{flat}$, where F_flat is the roughness of the casting block surface.

The speed v_p and position p of the robotic arm end, the feed speed V_c of the grinding wheel translational motion, the peripheral speed V_w of the grinding wheel rotational motion, the axial displacement f_a of the casting block motion, and the radial displacement f_r of the wheel motion are obtained at time t using the feedback information derived from the real-time knowledge feature value S_t.

The speed of the robotic arm end coincides with the speed of the grinding wheel: V_c(t) = v_p(t) = F_1(S_t) = k_1 S_t, V_w(t) = F_2(S_t) = k_2 S_t, f_a(t) = F_3(S_t) = k_3 S_t, and f_r(t) = F_4(S_t) = k_4 S_t; the coefficients k_1–k_4 are determined by experiment based on the casting block material.
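A minimal sketch of this mapping (in Python; the gain values shown are placeholders, since k_1–k_4 are experimental and material-dependent):

```python
def controller_parameters(L_t, W_t, F_flat, k1=1.0, k2=1.0, k3=1.0, k4=1.0):
    S_t = L_t * W_t * F_flat   # knowledge feature value, Eq. (9)
    V_c = k1 * S_t             # feed speed of wheel translation (= arm-end speed v_p)
    V_w = k2 * S_t             # peripheral speed of wheel rotation
    f_a = k3 * S_t             # axial displacement of block motion
    f_r = k4 * S_t             # radial displacement of wheel motion
    return V_c, V_w, f_a, f_r
```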

The robot controller achieves adaptive control through the dynamic real-time feedback information, wherein F_r is the reference force, q_r and q_{r−n} are the joint angles, q̇_r and q̇_{r−n} are the robot joint angular velocities, τ is the joint driving moment, f is the force applied to the casting block by the end of the robot, q_m is the feedback joint angle, and q̇_m is the feedback joint angular velocity.

The working process algorithm of the grinding robot controller model describes the closed-loop control process (see Figure 6).

First, the surface image I of the block is gained with the vision measurement and positioning system, and then the geometrical features of image I are obtained with the same system. By judging the quality of the casting block, the grinding decision is made. When the surface of the block does not meet the quality requirement, the robot grinding traits are obtained from the geometrical features of image I with the robot grinding controller in the grinding robot servo control system. Last, the grinding robot executes the grinding operation with the robot grinding traits, and the surface image I of the block feeds back to the vision measurement and positioning system to form a closed loop.

The working process algorithm describes the specific controlling method of the grinding robot controller (see Figure 7).

After the geometrical features of image I are obtained, the robot grinding traits are generated with the grinding robot servo control system in the controller, which is composed of a prediction model of trained deep neural networks.

[Figure 4: Changing of the neural network target vector set: ψ(Input = {S_i}) → f_l(Net = {W_{l,k,ij}, b_l}) → f(Output = {T_i}), with added target nodes.]

[Figure 5: Robot grinding controller model: the vision measurement and positioning system (image I, feature value S_t, outputs V_c, V_w, f_a, f_r, v_p, p), kinematics module, force control module (F_r, q_r, q_{r−n}, τ, f), position servo module (q_m, q̇_m), the robot, and the processing procedure in a closed loop.]


Then, the joint angles are computed from the robot grinding traits with the kinematics module, and the robot joint angular velocities are computed from the joint angles with the force control module. The grinding robot executes the grinding operation with the robot grinding traits, joint driving moment, and joint angular velocities.

3.2. Prediction Model of Controller Parameters of Grinding Robot. The research object in our study is the robot grinding process of the casting block of an engine. The block is one of the important parts of the engine, and the quality of the casting block directly affects automobile performance. With the development of automobile engine technology, high dimensional accuracy and mechanical performance of the engine block are required for a high-quality casting block. The automobile engine under study is a 1.5 L engine; the maximum outline dimension of the block is 400 mm × 320 mm × 253 mm, and it weighs 38 kg, made of HT300 material.

The controller parameters of the grinding robot affect grinding efficiency and quality; they include the feed speed V_c of the grinding wheel translational motion, the peripheral speed V_w of the grinding wheel rotational motion, the axial displacement f_a of the workpiece motion of the block, and the radial displacement f_r of the wheel motion (see Figure 8).

[Figure 6: The working process algorithm of the grinding robot controller model: Start → obtain surface image I of the block → obtain the geometrical features of image I → judge the quality of the casting block → if the quality requirement is met, finish; otherwise, get the robot grinding traits with the robot grinding controller → grind the block with the grinding robot → loop back.]

[Figure 7: The working process algorithm of the controlling method: from the geometrical features of image I, get the robot grinding traits with the grinding robot servo control system → compute the joint angles with the kinematics module → compute the robot joint angular velocities with the force control module → obtain the joint driving moment with the position servo control module → grind the block with the grinding robot.]


The specific controller parameters of the grinding robot are the output data, and their type and number are related to the machining method. The block is located on the bracket of the track, while the grinding wheel is located on the actuator side of the robot. During the process, the workpiece is fixed, whereas the grinding wheel moves.

According to the DIN EN ISO 4288 and ASME B46.1 standards, the input parameters are the features Fin = (Ra, Rz, Ry) of the workpiece surface morphology: Ra is the arithmetical mean deviation of the workpiece surface morphology, Rz is the ten-point height of irregularities of the workpiece surface morphology, and Ry is the maximum height of the workpiece surface morphology. The value of Ra of the workpiece surface after grinding ranges from 0.01 μm to 0.8 μm [40]. The features (Ra, Rz, Ry) are statistical extractions and dimensionality reductions of the surface morphology and cannot characterize all of it. They are only an empirical description of the surface morphology and cannot characterize its essential features; they are only statistical averages of the surface morphology with a lot of information loss. The standards DIN EN ISO 4287 and ASME B46.1 give more explicit and richer information about the surface morphology using four features, namely surface rugosity, standard deviation, skewness, and kurtosis [39, 41, 42], presented in (10)–(13). But these features still cannot work on all the pixels of the surface morphology and cannot give significant and systematic descriptive statistics.

$$R_a = \frac{1}{I}\int_0^I \left| Z(x) \right| dx, \tag{10}$$

$$R_q = \sqrt{\frac{1}{I}\int_0^I Z^2(x)\, dx}, \tag{11}$$

$$R_{sk} = \frac{1}{R_q^3} \cdot \frac{1}{I}\int_0^I Z^3(x)\, dx, \tag{12}$$

$$R_{ku} = \frac{1}{R_q^4} \cdot \frac{1}{I}\int_0^I Z^4(x)\, dx. \tag{13}$$

In order to solve the above problems, we improved the definitions of surface rugosity, standard deviation, skewness, and kurtosis in [26–28]. On the one hand, we give a unified definition of surface morphological features with definite mathematical meanings; the features correspond to the first moment, second moment, third moment, and fourth moment, respectively. On the other hand, the definitions of the features are extended to all pixels of the surface morphological image, which reduces the information loss to an optimal extent while extracting the essential features at the same time. The features are defined in (14)–(17) by improving the method of calculating depth from gray level [43].

$$I_{ij} = \frac{\gamma \times I_n \times \cos\alpha}{g_{ij}} - f - u_A, \tag{14}$$

$$\mu = \frac{1}{A \ast B}\sum_{i=1}^{A}\sum_{j=1}^{B} I_{ij}, \tag{15}$$

$$\sigma = \sqrt{\frac{1}{A \ast B}\sum_{i=1}^{A}\sum_{j=1}^{B}\left(I_{ij} - \mu\right)^2}, \tag{16}$$

$$M_{ij,k} = \frac{\left(I_{ij} - \mu\right)^k}{\sigma^k}, \quad k = 1, 2, 3, 4. \tag{17}$$

Here, γ is the reflectivity of the block surface to the incident light, I_n is the intensity at the block surface, and I_n × cos α is the vertical component of that intensity. f is the focal length of the camera, u_A is the object distance from the camera to the block surface, g_ij is the gray value of the pixel of block surface image I at position (i, j), and I_ij represents the depth value of the pixel of block surface image I at position (i, j). A and B are the maximum values of the coordinates i and j; μ is the average depth of all pixels of the block image, and σ is the mean square deviation of the depth of all pixels of the block image. M_{ij,k} is the k-th moment of the depth of the pixel at position (i, j): M_{ij,1} is the first moment, denoting the rugosity of the surface image at position (i, j); M_{ij,2} is the second moment, representing the standard deviation; M_{ij,3} is the third moment, signifying the skewness; and M_{ij,4} is the fourth moment, indicating the kurtosis of the surface image at position (i, j).

M_{ij,1}, M_{ij,2}, M_{ij,3}, and M_{ij,4} are called the subgraphs of rugosity, standard deviation, skewness, and kurtosis of the generalized surface morphology, which form the basis of this study.
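A compact sketch of (14)–(17) (in Python with NumPy; g is assumed to be a strictly positive 2-D array of gray values, and the optical constants follow the values stated in Section 3.4 with simplified unit handling):

```python
import numpy as np

def moment_subgraphs(g, gamma=0.65, I_n=1000.0, alpha=0.0, f=0.05, u_A=0.5):
    # Eq. (14): depth value of each pixel recovered from its gray level
    I = (gamma * I_n * np.cos(alpha)) / g - f - u_A
    mu = I.mean()        # Eq. (15): average depth over all A*B pixels
    sigma = I.std()      # Eq. (16): mean square deviation of the depth
    # Eq. (17): k-th moment subgraph, k = 1..4
    # (rugosity, standard deviation, skewness, and kurtosis subgraphs)
    return {k: ((I - mu) ** k) / sigma ** k for k in (1, 2, 3, 4)}
```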

3.3. Grinding Robot Prediction Algorithm of Controller Parameters of Grinding Robot. Grinding parameter prediction is completed by the trained deep neural networks (see Figure 9), which are part of the grinding robot controller. The surface image I of a part of the block is analyzed by the algorithm, and the geometrical features of the obtained surface image are then extracted, including the rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} of surface image I at position (i, j).

[Figure 8: Parameters of the grinding robot: feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r; the casting block sits on the track and bracket, and the grinding wheel and grinding head are on the grinding robot.]


The feature values are put into the well-trained deep neural networks, and the values of the robot grinding traits are obtained when the network computation finishes. Then, the grinding robot controller sends the control instructions to the other mechanisms of the robot.

In the constructed model, the surface roughness of the block surface was obtained by analyzing the surface of the workpiece, including the corresponding moment features. All the moment features of the image are input into the grinding controller, which describes the roughness of the workpiece surface and determines the feed speed V_c, the peripheral speed V_w, the axial displacement f_a, and the radial displacement f_r of the grinding robot actuator.

There is a complex nonlinear relationship between the rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} on one side and the feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r on the other, expressed in (18). This nonlinear relationship is described using deep neural networks.

$$F_{out}\left(V_c, V_w, f_a, f_r\right) = f\left(M_1, M_2, M_3, M_4\right). \tag{18}$$

Further, we give the process of generating the deep neural networks (see Figure 10). The feature subgraphs M_{ij,k} of the surface image I of a grinded part of the block are generated after the image is processed. The initial values of the weights and biases of the deep neural networks are set. The deep neural networks are trained using the (I, Fouts) training set that has been labeled with empirical knowledge, where I is the block images and Fouts are the grinding feature outputs corresponding to I. The trained deep neural networks are then fine-tuned to get a stage result. This iterative process continues until all the data in the training set have been processed, yielding the well-trained deep neural networks.
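A hedged skeleton of this process (in Python; make_subgraphs, init_network, train_batch, fine_tune, and converged are hypothetical stand-ins for the corresponding steps in Figure 10, not functions from the paper):

```python
def generate_deep_network(training_set, make_subgraphs, init_network,
                          train_batch, fine_tune, converged):
    net = init_network()                    # initialize weights and biases
    for image_I, F_outs in training_set:    # labeled (I, Fouts) pairs
        features = make_subgraphs(image_I)  # feature subgraphs M_ij,k of image I
        while not converged(net):           # train until stage conditions are met
            train_batch(net, features, F_outs)
        fine_tune(net)                      # fine-tune the stage-trained network
    return net                              # the well-trained deep neural networks
```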

Our proposed model has the characteristics of multiple features and multiple targets, which makes it complicated. It needs to consider the correlations among rugosity, standard deviation, skewness, and kurtosis and their influence on the deep neural networks. It also needs to consider the correlations among feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r and their influence on the deep neural networks. Further, the correlations between rugosity, standard deviation, skewness, and kurtosis and feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r need to be considered. The assumptions of the deep neural networks should also be respected, namely that the network nodes are independent of each other, and overfitting phenomena that violate these assumptions must be avoided.

The inputs of our model are several feature subgraphs rather than a single graph. Our model predicts the grinding traits, not just the classification of a given object. Thus, the model processes several feature subgraphs to predict several grinding traits in a multi-input, multioutput relationship, which differs from the one-to-one relationship of single predetermined object classification.

3.4. Experimental Data of Controller Parameters of Grinding Robot. The images of the block surface after grinding are taken using the vision system, and all the images are labeled with the corresponding grinding trait parameter values, as shown in (17). There are 400 labeled training data measured by experiment, according to the years of working experience of skilled technicians.

In (14), g_ij is the gray value of the block surface image; the reflectance γ of the block surface is 0.65; the focal length f is 50 mm; the object distance to the block surface is 0.5 m; α is 0 because a uniform light source is used, and the intensity at the block surface I_n is 1000 lx. The extended 3D shape of the block surface and its features are obtained by (17). Gray, Ra, M1, M2, M3, and M4 are the gray level, roughness, rugosity, standard deviation, skewness, and kurtosis; all these features are the inputs of the deep neural network controller.

We obtain the correlations between the input feature parameters of the neural network controller (see Figure 11). It can be seen that the correlations among the parameters are very complex, with both positive and negative correlations; the degrees of correlation also differ and cannot be expressed with an analytical formula.

The correlations between the output parameters of the neural network controller, which act as the robot grinding traits, are also obtained (see Figure 12).

[Figure 9: Prediction algorithm framework of controller parameters of grinding robot: obtain surface image I of part of the block → rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} → the trained deep neural networks → get the robot grinding traits.]


[Figure 10: The process of generating the deep neural networks: image I of a certain block surface → feature subgraphs M_{ij,k}; parameter initialization of the deep neural networks → training → tuning → stage-trained deep neural networks, iterating until the conditions are met and the image training set is finished → well-trained deep neural networks.]

[Figure 11: The correlations between the input feature parameters: average values of Gray, M1, M2, M3, and M4 over 400 samples.]


It can be seen that the correlations among the output parameters are also very complicated, with both positive and negative correlations; the degrees of correlation also differ and cannot be expressed with an analytical formula.

3.5. Predicted Result Analysis of Controller Parameters of Grinding Robot. In the experiment, the training of the deep learning neural networks for block grinding finished within 78 hours and 32 minutes in a computing environment of Windows Server 2010, an Intel Core i5-4308U CPU at 2.80 GHz, and 8.00 GB of RAM. The prediction time of the trained deep learning neural networks for block grinding is within 2 seconds.

In our controller, the model is implemented on the deep learning open-source framework Tiny-dnn [44]. In the deep learning neural network prediction training model, the minibatch_size is 4, the num_epochs is 3000, the activation function is tanh, and its value ranges from −0.8 to +0.8. In the input layer, the labeled input data number 400, and the default normalized size n_length × n_width is [32 × 32]. In the feature extraction layer, convolution and sampling are performed for each feature subgraph; the convolution kernel is [5 × 5], and the sampling kernel is [2 × 2]. In the fuzzy analysis layer, the number of jump positions n_s is 10 and the initial value of the quantum interval θ_rj is 0.1; the output layer is fully joined. The inputs include the subgraphs of rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} of the block surface morphology, and each feature subgraph works independently. The full join between the fuzzy quantum analysis layer and the output layer of the deep neural network is processed with all the feature subgraphs. The outputs are the feed speed V_c, the peripheral speed V_w, the axial displacement f_a, and the radial displacement f_r.
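For orientation, a rough dimensional sketch of how the stated [5 × 5] convolution and [2 × 2] sampling kernels shrink a [32 × 32] feature subgraph (in Python; the two-stage depth is our assumption, as the number of convolution–sampling stages is not stated):

```python
def feature_extraction_sizes(size=32, conv_k=5, pool_k=2, stages=2):
    sizes = [size]
    for _ in range(stages):       # each stage: convolution, then sampling
        size = size - conv_k + 1  # valid 5x5 convolution: 32 -> 28, 14 -> 10
        size = size // pool_k     # 2x2 sampling: 28 -> 14, 10 -> 5
        sizes.append(size)
    return sizes

print(feature_extraction_sizes())  # [32, 14, 5] under these assumptions
```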

3.5.1. Deep Learning Neuron Network Training Results. The training results of the deep learning neuron network for the block surface controller parameters of the grinding robot are obtained as the number of samples increases (see Figure 13).

The accuracy of all the output features is more than 80% for the prediction of the deep learning neuron network controller under the given empirically labeled data set. The accuracy of the grinding traits increases fastest when there are fewer than 600 samples. The accuracy of all the grinding traits reaches more than 95% at 1000 samples and still increases slowly as training continues. In the final training result, the prediction accuracies of feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r are above 98% compared to the labeled training data set.

Similarly, the training results of the deep learning neuron network for the block surface controller parameters of the grinding robot are obtained as the number of training iterations increases (see Figure 14).

The accuracy of all the output features is again more than 80% under the given empirically labeled data set. The accuracy of the grinding traits increases fastest when the number of training iterations is less than 600. The accuracy of all the grinding traits reaches more than 95% at 1500 training iterations and still increases slowly thereafter. In the final training result, the prediction accuracies of feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r are above 97% compared to the labeled training data set.

From the training results of the neural networks in Figures 13 and 14, it can be seen that the prediction accuracy of the deep learning neural networks increases monotonically with both the sample size and the number of training iterations, and that the sample size has a bigger influence on the accuracy of the simulation results than the number of training iterations.

[Figure 12: The correlations between the output parameters: average values of peripheral speed, feed speed, radial displacement, and axial displacement over 400 samples.]


The prediction errors of feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r with the deep learning neural networks can be defined as

$$e_{l,V_c} = 1 - \mathrm{accurate}_{V_c}, \tag{19}$$

$$e_{l,V_w} = 1 - \mathrm{accurate}_{V_w}, \tag{20}$$

$$e_{l,f_a} = 1 - \mathrm{accurate}_{f_a}, \tag{21}$$

$$e_{l,f_r} = 1 - \mathrm{accurate}_{f_r}. \tag{22}$$

From (19)–(22), we can see that e_{l,V_c}, e_{l,V_w}, e_{l,f_a}, and e_{l,f_r} converge and decrease monotonically. The experimental results verify the correctness of Theorem 1 (consistent approximation theorem).

3.5.2. Predicted Experimental Results with Multioutput. The trained deep neural networks are used to predict the traits of casting block grinding. The data sample number is 100, used as the input of the trained deep neural networks. The predicted results for feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r of the grinding wheel are obtained accordingly (see Figures 15–18). The black curve represents the labeled grinding trait data of the block surface; the other colored curves are the predicted results of the trained deep neural networks, and the average accuracies of the predicted results are given in Table 1.

From Figures 15–18, we can see that g_ij [32 × 32], g_ij [48 × 48], and g_ij [52 × 52] are input data sets of different sizes; M_{ij,1} [32 × 32], M_{ij,2} [32 × 32], M_{ij,3} [32 × 32], and M_{ij,4} [32 × 32] are the feature data sets into which g_ij [32 × 32] is transformed, and each input data set corresponds to the respective output data set {V_c, V_w, f_a, f_r}. For the given set of input feature data, at least one data set can be found to yield the eigen solution {Net_tr}_r of the solution {Net_tr} of the neural networks {W_{l,k,ij}, b_l} corresponding to the data set g_ij [32 × 32].

[Figure 13: Trends of accuracy of the deep learning neuron network for casting block grinding as the number of samples increases: accuracy rates of peripheral speed, feed speed, axial displacement, and radial displacement over sample sizes 200–3,000.]

[Figure 14: Trends of accuracy of the deep learning neuron network for casting block grinding as the number of training iterations increases: accuracy rates of peripheral speed, feed speed, axial displacement, and radial displacement over 200–3,000 training iterations.]


[Figure 15: Predicted results of feed speed of the grinding wheel (mm/s) over 100 test samples: original data versus predictions with g_ij at [32 × 32], [48 × 48], and [52 × 52] and with M1–M4 at [32 × 32].]

[Figure 16: Predicted results of peripheral speed of the grinding wheel (m/s) over 100 test samples, with the same legend as Figure 15.]

[Figure 17: Predicted results of axial displacement of the grinding wheel (μm) over 100 test samples, with the same legend as Figure 15.]


The prediction accuracy of the eigen solution {Net_tr}_r is equivalent to that of the neural network solution {Net_tr} corresponding to the data set g_ij [32 × 32]; thus, the correctness of Theorem 2 (eigen solution existence theorem) is verified. The following conclusions are drawn from Figures 15–18 and Table 1 with the average accuracies of the predicted results.

(1) The trained deep learning neural networks of our model can predict the feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r simultaneously, and their average minimum prediction accuracies are more than 95%. The more pixels the input data samples from the image, the higher the corresponding prediction accuracy. The green, blue, and red curves are the predicted results of our model with pixels [32 × 32], [48 × 48], and [52 × 52], where the accuracy improves by 1% in turn.

(2) Our proposed deep neuron network can well reflect the influence between the features of rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} and the robot grinding traits, described as follows. (a) There is at least one feature among rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} acting on all the grinding traits of feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r. (b) The joint prediction accuracy of the robot grinding traits with all block surface features is within the same precision level as the optimal prediction accuracy of each independent robot grinding trait under rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, or kurtosis M_{ij,4}. Furthermore, the joint prediction accuracy is sometimes even superior to the single-feature optimal prediction precision: as seen in Figure 17, the joint prediction accuracy of axial displacement is superior to the optimal prediction precision of rugosity M_{ij,1}. All the joint predicted results of the robot grinding traits obtain the optimal prediction accuracy at the same time, and the joint predicted results do not interfere with each other.

[Figure 18: Predicted results of radial displacement of the grinding wheel (mm) over 100 test samples, with the same legend as Figure 15.]

Table 1: Average accuracies of predicted results with multioutput.

Input data set            V_c (mm/s)   V_w (m/s)   f_a (μm)    f_r (mm)
g_ij [32 × 32] (A32)      0.967164     0.975184    0.960752    0.950227
g_ij [48 × 48] (A48)      0.970117     0.979107    0.958325    0.953480
g_ij [52 × 52] (A52)      0.972249     0.980596    0.962713    0.958631
M_ij,1 [32 × 32] (AM1)    0.967741     0.976567    0.955851    0.951604
M_ij,2 [32 × 32] (AM2)    0.944570     0.948410    0.914060    0.907100
M_ij,3 [32 × 32] (AM3)    0.928147     0.929163    0.887634    0.887930
M_ij,4 [32 × 32] (AM4)    0.829836     0.759110    0.745841    0.744105


(c) Different surface features have different influences on the controller parameters of the grinding robot. Here, the influences of rugosity M_{ij,1}, standard deviation M_{ij,2}, skewness M_{ij,3}, and kurtosis M_{ij,4} are in descending order. As shown in Figures 15, 16, and 18, the average prediction accuracy of rugosity M_{ij,1} is higher than the joint prediction accuracy when the dotted curve is the image with pixels [32 × 32]. The predicted results of standard deviation M_{ij,2} and skewness M_{ij,3} fluctuate with larger amplitude, and their accuracies are also less than the joint accuracy. The predicted result of kurtosis M_{ij,4} is a flat line; the corresponding controller has almost no prediction capability, and its prediction accuracy is only a random probability.

(3) There are severe deviations at times 78–83 and 87–95 as shown in Figure 15. Similar deviations occur at times 4 and 85–95 as shown in Figure 16. At times 68–72 and 86–92 in Figure 17, as well as at times 66–68 and 84–94 in Figure 18, severe abnormalities were recorded, which are caused by the emergence of new data samples. The solution to this problem is to label the new data samples and add them to the training set; the model can then produce reasonable predicted results when a similar situation occurs in the future.

Therefore, given the standard image of the grinded block surface, we can accurately obtain the optimal configuration values of the feed speed V_c, peripheral speed V_w, axial displacement f_a, and radial displacement f_r using the well-trained deep neuron network controller.

3.5.3. Predicted Experimental Results with Single Output. In the independent experiments with a single output, a many-to-one deep learning-oriented prediction model is adopted, and the inputs are the moment feature graphs of the grinding casting block surface corresponding to the first, second, third, and fourth moments. Each of the moment feature graphs works in a relatively independent network. The output between the fuzzy analysis layer and the output layer of the network is a single parameter: peripheral speed V_w, feed speed V_c, axial displacement f_a, or radial displacement f_r. The predicted results are obtained (see Figures 19–22), and the average accuracies of the predicted results are given (see Table 2).

In the independent experiment, $g_{ij}$ [32 × 32], $g_{ij}$ [48 × 48], and $g_{ij}$ [52 × 52] are input data sets of different sizes; $M_{ij,1}$ [32 × 32], $M_{ij,2}$ [32 × 32], $M_{ij,3}$ [32 × 32], and $M_{ij,4}$ [32 × 32] are the feature data sets that $g_{ij}$ [32 × 32] is transformed into, and each input data set corresponds to the respective output data set $\{V_c\}$, $\{V_w\}$, $\{f_a\}$, or $\{f_r\}$. The solution of the independent single-output experiment is a partial solution $Net_{tr}^{Part,i}$ of the multioutput experiment's complete solution $Net_{tr}^{Max,i}$. The predicted results of the partial solutions $Net_{tr}^{Part,i}$ are shown in Figures 19–22. The data sets $M_{ij,1}$ of $g_{ij}$ [32 × 32], $g_{ij}$ [48 × 48], and $g_{ij}$ [52 × 52] have a partial solution for every output data set $\{V_c\}$, $\{V_w\}$, $\{f_a\}$, and $\{f_r\}$. Similarly, $M_{ij,3}$ has a partial solution for $\{V_w\}$. In the multioutput experiment, the feature data sets $M_{ij,1}$, $M_{ij,2}$, $M_{ij,3}$, and $M_{ij,4}$ of the input data sets $g_{ij}$ [32 × 32], $g_{ij}$ [48 × 48], and $g_{ij}$ [52 × 52] have independent multioutput complete solutions $Net_{tr}^{Max,i}$ corresponding to $\{V_c, V_w, f_a, f_r\}$, as shown in Figures 15–18. All of $M_{ij,1}$, $M_{ij,2}$, and $M_{ij,3}$ have complete solutions $Net_{tr}^{Max,i}$, except for $M_{ij,4}$. From the above, we can draw the conclusion that a partial solution must correspond to a complete solution. Thus, our experiment verifies the correctness of Theorem 3 (consistency theorem of complete solution and partial solution). The independent experiment shows that $M_{ij,4}$ has no complete solution $Net_{tr}^{Max,i}$ corresponding to $\{V_c, V_w, f_a, f_r\}$, thus verifying the correctness of Theorem 4 (no solution theorem).
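The consistency between complete and partial solutions claimed by Theorem 3 can also be probed numerically. The sketch below, again with MLPRegressor as a stand-in for the paper's network, trains a complete (multioutput) solution $Net_{tr}^{Max}$ and compares its per-trait scores with the corresponding partial (single-output) solutions $Net_{tr}^{Part,i}$; the R²-style scoring is an assumption (see the note following Table 2).

```python
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

def complete_vs_partial(X_tr, Y_tr, X_te, Y_te):
    """Compare the complete (multioutput) solution against the partial
    (single-output) solutions for each grinding trait."""
    complete = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
    joint = complete.fit(X_tr, Y_tr).predict(X_te)       # Y has 4 columns
    for t in range(Y_tr.shape[1]):
        partial = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
        single = partial.fit(X_tr, Y_tr[:, t]).predict(X_te)
        print(f"trait {t}: complete={r2_score(Y_te[:, t], joint[:, t]):.4f} "
              f"partial={r2_score(Y_te[:, t], single):.4f}")
```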

Figure 19: Predicted results of peripheral speed of grinding wheel. [Plot: value (m/s) vs. times 5–100; curves: original data; predicted result with peripheral speed 32; predicted result with all 32; predicted results using M1–M4 with peripheral speed 32.]


As shown in Figure 19, the black curve is the original empirical data of the peripheral speed of the grinding wheel; the other curves are the predicted results of the deep learning-oriented controllers. From the figures and the average accuracies in Table 2, we draw the following conclusions. (1) When the output parameter is only the peripheral speed, the single-output prediction accuracy of each moment is within the same percentage-point range as the corresponding multioutput prediction accuracy, as the blue and red curves show; the multioutput prediction accuracies do not interfere with each other despite the interaction of the multioutput parameters. (2) In this experiment, only the first moment functions; the other moments do not, as shown by the dotted straight lines in the figures. The predicted results of the second, third, and fourth moments are only random results. The multioutput prediction accuracy is consistent with the first-moment predicted result, which means that the first moment has a dominant effect in the multioutput prediction. (3) There are three nonfunctional moments in the single-output experiment, two more than in the multioutput experiment. This shows that a dominant moment exists among the moments functioning in the multioutput experiment.

The predicted result for the feed speed of the grinding casting block surface is similar to that for the peripheral speed, as shown in Figure 20. The differences are as follows. (1) Compared with Figure 19, the second and fourth moments do not work in this experiment, and the predicted experimental accuracy of the fourth moment with single output is reduced by 10%, which again shows that a dominant moment exists among the moments functioning in the multioutput experiment. (2) The first and third moments function in this experiment, which means that the input feature parameters are determined by the grinding surface morphology itself, not by the order of the moments.

Figure 20: Predicted results of feed speed of grinding wheel. [Plot: value (mm/s) vs. times 5–100; curves: original data; predicted result with feed speed 32; predicted result with all 32; predicted results using M1–M4 with feed speed 32.]

Figure 21: Predicted results of radial displacement of grinding wheel. [Plot: value (mm) vs. times 5–100; curves: original data; predicted result with radial displacement 32; predicted result with all 32; predicted results using M1–M4 with radial displacement 32.]

From the above analysis, it can be seen that (1) high prediction accuracy is achieved consistently across multi-input multioutput prediction, multi-input single-output prediction, and single-input single-output prediction. The optimal prediction accuracy is obtained at the same time for all the predicted results, which do not interfere with each other because of the dominant effect. (2) This consistency of optimal prediction accuracy across multi-input multioutput, multi-input single-output, and single-input single-output prediction fully demonstrates that the grinding data do have an inherent law that can be captured and predicted by the designed deep learning neural network controller. To further verify this, we conducted an experiment with random output as follows.

3.5.4. Predicted Experimental Results with Random Output. In the random-output experiment, we adopt the same proposed deep learning-oriented prediction model, and the inputs are the moment graphs of the grinding surface morphology data. Each moment graph works in a relatively independent network. All the output data values of the feed speed $V_c$, peripheral speed $V_w$, axial displacement $f_a$, and radial displacement $f_r$ are randomly generated by computer simulation. The predicted results of the random-output experiment are shown in Figures 23–26; the black curves are the randomly generated data, and the blue curves are the predicted results of our controller. The average accuracies of the predicted results with random output are given in Table 3, and the prediction results simply follow the random data. That is to say, we get random results.

From the analysis of the random-output experiment, we obtain a very interesting result. If the analyzed object data are generated at random, there is no nonrandom feature, and the deep learning controller produces random predicted results in which no nonrandom law can be found. In other words, the deep learning prediction controller cannot create laws out of nothing, which proves the validity of the proposed model from another side. The stronger the law and the more the input features function, the higher the predicted accuracy. The random-output experiment further verifies the correctness of Theorem 4.
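This random-output control can be reproduced in the same framework: train the same stand-in regressor against targets drawn uniformly at random and observe that the held-out score collapses toward (or below) zero, i.e., no law is learned, consistent with Theorem 4. The value range and seed below are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

def random_output_control(X_tr, X_te, lo=1.0, hi=5.0, seed=0):
    """Replace the machining-parameter targets with random values; a test
    score near or below zero confirms nothing nonrandom was learned."""
    rng = np.random.default_rng(seed)
    y_tr = rng.uniform(lo, hi, size=len(X_tr))
    y_te = rng.uniform(lo, hi, size=len(X_te))
    net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
    net.fit(X_tr, y_tr)
    return r2_score(y_te, net.predict(X_te))
```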

Figure 22: Predicted results of axial displacement of grinding wheel. [Plot: value (μm) vs. times 5–100; curves: original data; predicted result with axial displacement 32; predicted result with all 32; predicted results using M1–M4 with axial displacement 32.]

Table 2: Average accuracies of predicted results with single output.

Average accuracy          Vc (mm/s)   Vw (m/s)   fa (μm)     fr (mm)
gij [32 × 32] (A32)       0.975184    0.967164   0.950027    0.960752
gij [32 × 32] (Ar32)      0.977520    0.972553   0.958349    0.964409
Mij,1 [32 × 32] (ArM1)    0.979840    0.907778   0.956100    0.938146
Mij,2 [32 × 32] (ArM2)    −0.57020    0.763325   0.595728    0.603273
Mij,3 [32 × 32] (ArM3)    −0.570200   0.929030   −4.42592    −4.63600
Mij,4 [32 × 32] (ArM4)    0.749110    0.763325   0.595728    0.603237
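The negative entries in Table 2 (the ArM2 and ArM3 rows) indicate that the "average accuracy" behaves like a coefficient of determination, which can fall below zero when a predictor does worse than the mean of the data; this is the assumption behind the R²-style scoring used in the sketches above. For reference:

$$R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2}$$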


Figure 23: Predicted results of peripheral speed of grinding wheel. [Plot: value (random data) vs. times 5–100; curves: original (random) data; predicted result with all 32.]

Figure 24: Predicted results of feed speed of grinding wheel. [Plot: value (random data) vs. times 5–100; curves: original (random) data; predicted result with all 32.]

Figure 25: Predicted results of radial displacement of grinding wheel. [Plot: value (random data) vs. times 5–100; curves: original (random) data; predicted result with all 32.]

Figure 26: Predicted results of axial displacement of grinding wheel. [Plot: value (random data) vs. times 5–100; curves: original (random) data; predicted result with all 32.]

Table 3: Average accuracies of predicted results with random output.

Average accuracy      Vc (mm/s)   Vw (m/s)   fa (μm)    fr (mm)
gij [32 × 32] (Ar)    0.708348    0.712986   0.463122   0.347157

4. Conclusions

In our study, we give the eigen solution theory of general neural networks and quantitatively describe the input environments, the macroscopic structure, the output environments, and their relationships in neural networks. The eigen solution theory is applied and validated in the prediction of controller parameters of a grinding robot in complex environments with the proposed deep learning neural networks, which provides theoretical support for wider applications of the proposed deep learning neural networks.

Data Availability

The data used to support the findings of this study are available from the supplementary materials.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by the National Science Foundation of China [51875266, 61379064, 61603326, 61772448], the Industry-Academia Prospect Research Foundation of Jiangsu [BY2016066-06, BY2015058-01], the National Students' Innovation and Entrepreneurship Training Program of Jiangsu [201310324006, 2103242013006], the Tendered Special Fund Project on Promoting the Transformation of Scientific and Technological Achievements of Jiangsu [BA2015026], the Special Fund Project on Promoting the Transformation of Scientific and Technological Achievements of Jiangsu [BA2015078], and the Research Innovation Program for College Graduates of Jiangsu [KYLX16_0880].

Supplementary Materials

The data sets provided are images of the block surface after grinding, taken using a vision system. There are 400 training images in the imagetrain set and 100 test images in the imagetest set of the input data sets, and there are 400 items in the train set and 100 items in the test set of the output data sets. Each item contains the values of feed speed, peripheral speed, axial displacement, and radial displacement. (Supplementary Materials)
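A loader for these supplementary data sets might look like the following sketch; the directory names, file extensions, and CSV label format are hypothetical, since only the split sizes and the four output values per item are stated above.

```python
import numpy as np
from pathlib import Path
from PIL import Image

def load_split(root, image_dir, label_file):
    """Load grey-level block-surface images and the matching machining
    parameters (Vc, Vw, fa, fr) for one split. Layout is hypothetical."""
    paths = sorted(Path(root, image_dir).glob("*.png"))
    images = np.stack([np.asarray(Image.open(p).convert("L")) for p in paths])
    labels = np.loadtxt(Path(root, label_file), delimiter=",")  # one row per item
    return images, labels

# X_train, Y_train = load_split("data", "imagetrain", "train_outputs.csv")  # 400 items
# X_test,  Y_test  = load_split("data", "imagetest",  "test_outputs.csv")   # 100 items
```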
