
Page 1: project-8

IEEE PES PowerAfrica 2007 Conference and Exposition, Johannesburg, South Africa, 16-20 July 2007

Online Voltage Stability Monitoring and Contingency Ranking using RBF Neural Network

B. Moradzadeh, S.H. Hosseinian, M.R. Toosi and M.B. Menhaj
Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Iran

Email: [email protected]

Abstract: Voltage stability is one of the major concerns in competitive electricity markets. In this paper, an RBF neural network is applied to predict the static voltage stability index and rank the critical line-outage contingencies. Three distinct feature extraction algorithms are proposed to speed up the neural network training process by reducing the dimension of the input training vectors. The first algorithm introduces a new feature extraction technique based on a weak-bus identification method. The second and third algorithms are based on the statistical methods of Principal Component Analysis (PCA) and Independent Component Analysis (ICA), respectively. These algorithms offer beneficial solutions for enhancing neural network training speed. In all of the presented algorithms, a clustering method is applied to reduce the number of neural network training vectors. Simulation results for the IEEE 30-bus test system demonstrate the effectiveness of the proposed algorithms for online voltage stability index prediction and contingency ranking.

I. INTRODUCTION

Voltage stability is defined as the ability of a power system to maintain acceptable bus voltages at every node under normal operating conditions, after load increases, following system configuration changes, or when the system is subjected to disturbances such as line or generator outages [1]. Voltage collapse may be caused by a variety of single or multiple contingencies, known as voltage contingencies, in which the voltage stability of the power system is threatened [2]. Conventional evaluation techniques based on analysis of the full or reduced load-flow Jacobian matrix, such as singular value decomposition, eigenvalue calculation, sensitivity factors, and modal analysis, are time consuming [3]-[5]. Therefore, they are not suitable for online application in large-scale power systems. Since the 1990s, extensive research has been carried out on the application of neural networks to power system problems [6]. Artificial neural networks (ANNs) have shown great promise in power system engineering due to their ability to synthesize complex mappings accurately and quickly. Most of the published work in this area uses the multilayer perceptron (MLP) model trained with the back-propagation (BP) algorithm, which often suffers from local minima and overfitting problems.

Some research has been devoted to neural network applications in voltage security assessment and monitoring [7]-[11]. Multi-layered feed-forward neural networks have been used for power margin estimation associated with static voltage stability by means of different training criteria and algorithms. The active and reactive powers at load and generation buses are frequently used as inputs to the multi-layered feed-forward neural network [7]-[10]; bus voltages and angles are also part of the inputs in some work [7], [9]; and in [11], the active and reactive power flows of some selected lines are used as inputs as well. The Radial Basis Function Network (RBFN), with its nonlinear mapping capability, has become increasingly popular in recent years due to its simple structure and training efficiency. An RBFN has only one nonlinear hidden layer and one linear output layer. RBFN has been applied to active power contingency ranking/screening in [12], where a separate RBFN was trained for each contingency. In [2], both unsupervised and supervised learning were applied to the RBFN in order to reduce the number of neural networks required for voltage contingency screening and ranking. An approach based on a class separability index and correlation coefficients was used to select the relevant features for the RBFN. In [1], the active and reactive loads on all PQ buses are taken as the input set. In large-scale power systems, this method generates large input training vectors, which leads to a slow training process. In the present paper, three distinct feature extraction algorithms are proposed in order to reduce the dimension of the input training vectors and speed up RBFN training.

II. VOLTAGE STABILITY INDEX

The minimum singular value (MSV) of the load-flow Jacobian matrix is proposed as an index quantifying the proximity to the voltage collapse point. The right singular vector (RSV) corresponding to the minimum singular value of the Jacobian matrix can be utilized to indicate sensitive voltages and thereby identify the weakest nodes in the power system [5].
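As an illustration of this index, the MSV and the associated RSV can be read off a singular value decomposition of the Jacobian; the bus with the largest RSV magnitude is flagged as the weakest. The Jacobian below is a made-up 3x3 example for illustration only, not a matrix from the paper.

```python
import numpy as np

# Hypothetical reduced load-flow Jacobian (made-up numbers for illustration).
J = np.array([[ 8.0, -2.0, -1.0],
              [-2.0,  6.0, -1.5],
              [-1.0, -1.5,  4.0]])

# SVD: J = U diag(s) V^T, singular values returned in descending order.
U, s, Vt = np.linalg.svd(J)

msv = s[-1]        # minimum singular value: proximity index to voltage collapse
rsv = Vt[-1, :]    # right singular vector paired with the MSV

# The bus with the largest |RSV| component is the most voltage-sensitive ("weakest").
weakest_bus = int(np.argmax(np.abs(rsv)))
print(f"MSV = {msv:.4f}, weakest bus index = {weakest_bus}")
```

A small MSV signals an ill-conditioned Jacobian, i.e. operation near the collapse point.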

III. PROPOSED ALGORITHMS

In this paper, three different algorithms are proposed for feature extraction. These algorithms reduce the training time of the RBFN with acceptable accuracy.

1-4244-1478-4/07/$25.00 ©2007 IEEE

Page 2: project-8

A. First algorithm

In this algorithm, three groups of parameters are considered for feature extraction. The first group includes the active and reactive loads on the weak PQ buses; load variations on these buses have a great effect on the voltage stability index. The second group consists of the active and reactive loads on the terminal buses of critical lines; this group must be included in the input set to enhance the accuracy of the contingency ranking. The third group is the ratio of the sum of the active and reactive loads on the remaining PQ buses to the sum of their base-case active and reactive loads.
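The three parameter groups can be assembled into a reduced feature vector along these lines; the bus index set and the load data below are illustrative placeholders, not the paper's actual selection.

```python
import numpy as np

# Hypothetical load data: P[i], Q[i] are active/reactive loads at PQ bus i,
# and P0, Q0 their base-case values. The bus index sets are illustrative only.
rng = np.random.default_rng(0)
n_bus = 21
P0, Q0 = rng.uniform(5, 30, n_bus), rng.uniform(2, 15, n_bus)
P, Q = P0 * rng.uniform(0.5, 1.5, n_bus), Q0 * rng.uniform(0.5, 1.5, n_bus)

selected = [0, 3, 7, 11]   # weak buses + critical-line terminals (illustrative)
rest = [i for i in range(n_bus) if i not in selected]

# Groups 1 & 2: loads on the selected buses. Group 3: aggregate load ratios
# of the remaining PQ buses relative to their base-case values.
features = np.concatenate([
    P[selected], Q[selected],
    [P[rest].sum() / P0[rest].sum(), Q[rest].sum() / Q0[rest].sum()],
])
print(features.shape)      # 2*len(selected) + 2 entries
```

The last two entries summarize all remaining buses, which is what shrinks the input dimension.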

B. Second algorithm

In this algorithm, Principal Component Analysis (PCA) is employed to reduce the dimension of the neural network input training vectors.

PCA is a way of identifying patterns in data and expressing the data so as to highlight their similarities and differences. It describes a data set in terms of its variance: each principal component accounts for a percentage of the total variance of the data set, and its loadings (weights) quantify how much each variate contributes to that variance. In particular, the first principal component describes the greatest amount of variance in the data set. Since patterns can be hard to find in high-dimensional data, where graphical representation is not available, the other main advantage of PCA is that once such patterns are found, the data can be compressed by reducing the number of dimensions without much loss of information [13]-[15]. Principal components can be calculated from the eigenvectors and eigenvalues of the covariance or correlation matrix.

Cov(X)_{ij} = (1/n) Σ_{k=1..n} (X_{ik} − X̄_i)(X_{jk} − X̄_j)    (1)

where n is the number of input training samples, i = 1, 2, ..., m, j = 1, 2, ..., m (m is the vector dimension), and X̄_i is the average of the i-th component over all samples. All components can be computed by solving:

C w_p = λ_p w_p,   p = 1, 2, ..., m    (2)

where C is the covariance matrix, w_p is the p-th principal component (eigenvector) and λ_p is the corresponding eigenvalue. The λ_p are positive values proportional to the fraction of the total variance accounted for by each component, and the eigenvectors have the important property of forming an orthogonal set. As PCA packs the greatest energy into the fewest principal components, components corresponding to eigenvalues smaller than a threshold can be discarded with minimal loss of representational capability. The coefficients of the principal components of the q-th vector are then given by:

a_{qp} = Σ_{i=1..m} x_{qi} w_{ip},   p = 1, 2, ..., z,   q = 1, 2, ..., n    (3)

where z is the number of eigenvalues bigger than the threshold.
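Equations (1)-(3) can be sketched in a few lines of NumPy; the data matrix here is synthetic and the eigenvalue threshold is an assumed value, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 6                                          # n samples of dimension m
X = rng.normal(size=(n, 2)) @ rng.normal(size=(2, m))  # synthetic low-rank data
X += 0.01 * rng.normal(size=(n, m))                    # plus small noise

# Eq. (1): covariance matrix of the centered data.
Xc = X - X.mean(axis=0)
C = (Xc.T @ Xc) / n

# Eq. (2): eigen-decomposition; eigenvectors are the principal components.
lam, W = np.linalg.eigh(C)            # eigenvalues in ascending order
order = np.argsort(lam)[::-1]         # reorder to descending variance
lam, W = lam[order], W[:, order]

# Keep only components whose eigenvalues exceed a threshold (assumed value).
z = int(np.sum(lam > 1e-3))

# Eq. (3): a_{qp} = sum_i x_{qi} w_{ip} -> project data onto kept components.
A = Xc @ W[:, :z]
print(A.shape)                        # (n, z): the reduced training vectors
```

Because the synthetic data is essentially rank two, only a couple of eigenvalues survive the threshold and the 6-dimensional vectors compress accordingly.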

C. Third algorithm

In this section, we review the definition of Independent Component Analysis (ICA) as well as the differences between PCA and ICA. ICA is a statistical method for transforming multidimensional vectors into components that are statistically as independent from each other as possible [16]. ICA was originally developed to deal with blind source separation problems, but it has since been applied to many different problems including exploratory data analysis, blind deconvolution, and feature extraction. A fundamental problem in signal processing and data mining is finding suitable representations of image, audio or other kinds of data for tasks like compression and de-noising. ICA assumes each observed data vector to be a linear combination of unknown, statistically independent components. Let us denote by x the n-dimensional observed data vector with elements {x_1, x_2, ..., x_n}, and likewise by s the m-dimensional source vector with elements {s_1, s_2, ..., s_m}. Let A be the mixing matrix with elements a_ij. All vectors are understood as column vectors. Using this notation, the ICA mixing model is written as

x = A s    (4)

The basic model of ICA is shown in Fig. 1. The ICA model is a generative model: it describes how the observed data are generated by a process of mixing the components s_i. These components cannot be directly observed, and the mixing matrix is also assumed to be unknown. All we can use is the observed data vector x, from which both A and s must be estimated. The starting point of ICA is the assumption that the components s_i are statistically independent. For simplicity, we also assume that n is equal to m, though this assumption is sometimes relaxed. As shown in equation (5), recently developed ICA techniques can be used to estimate the unmixing matrix W based on the independence of the estimated independent components u [17]-[18].

Page 3: project-8

u = Wx (5)

The representation of the observed data x in terms of the estimated independent components u is given by the inverse of W. We can use this representation instead of the original observed data x as the input to the prediction model.
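The mixing/unmixing relations (4)-(5) can be illustrated directly. Here the sources, the mixing matrix A, and the unmixing matrix W = A^{-1} are all synthetic; in a real application W would be estimated from x alone (e.g. by EASI or infomax, as the paper does).

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 1000

# Two synthetic, statistically independent, non-Gaussian sources s.
s = np.vstack([rng.laplace(size=n_samples),
               rng.uniform(-1, 1, n_samples)])

# Eq. (4): the observed data are an unknown linear mixture x = A s.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s

# Eq. (5): u = W x recovers the sources when W inverts the mixing.
# In practice W must be *estimated* from x (EASI, infomax, FastICA);
# here we use the true inverse purely to illustrate the model.
W = np.linalg.inv(A)
u = W @ x
print(np.max(np.abs(u - s)))   # ~0: exact recovery with the true inverse
```

Non-Gaussianity of the sources is what makes the estimation of W from x alone possible, as noted in the PCA/ICA comparison below.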

Fig. 1. The basic model of ICA.

D. Differences between PCA and ICA

The main goal of PCA is to find directions within the data x that maximize variance and sometimes reduce the effects of noise. As a result, the dimensionality of the data is reduced by using only the principal components that contribute to the covariance, which may improve the visibility of structure. In the mean-square (second-order) error sense, PCA may be the optimal method for dimension reduction. However, the projections onto the principal components derived by PCA may provide less information than those derived by higher-order methods like ICA. Fig. 2 clearly shows the difference between the directions determined by PCA and ICA on a bivariate data set. The directions determined by PCA, i.e. the principal components (PCs), are orthogonal to each other and directed towards the maximum variance, because of the basic assumption that the distribution of the data is Gaussian. However, the directions found by ICA, the independent components (ICs), point in different directions that provide a more meaningful interpretation of the given data. This is possible because ICA does not assume Gaussianity of the observed data [19].

IV. CLUSTERING ALGORITHM

In this paper, a clustering method is used in order to reduce the number of neural network training vectors [20]. The procedure of the applied algorithm is shown in Fig. 3. The center of each cluster is a representative of all vectors in that cluster, so the other vectors in the cluster are discarded from the training vector list.

There is no specific rule for calculating the neighborhood radius r; it must be determined by trial and error. The selected r must lead to acceptable accuracy and training speed.
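The flow chart in Fig. 3 amounts to a single-pass "leader" clustering; the sketch below is our reading of it, with an illustrative radius and synthetic data rather than the paper's loading vectors.

```python
import numpy as np

def leader_clustering(vectors, r):
    """Single-pass clustering: each vector joins the first existing cluster
    whose center lies within radius r; otherwise it founds a new cluster.
    The cluster centers become the reduced training set."""
    centers = []
    for v in vectors:
        if not any(np.linalg.norm(v - c) <= r for c in centers):
            centers.append(v)
    return np.array(centers)

rng = np.random.default_rng(3)
data = rng.uniform(0.5, 1.5, size=(2000, 4))   # synthetic loading vectors
reduced = leader_clustering(data, r=0.3)
print(len(data), "->", len(reduced))           # fewer representative vectors
```

A larger r merges more vectors per cluster (faster training, coarser coverage); a smaller r keeps more representatives, which matches the trial-and-error trade-off described above.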

Fig. 2. (a) Principal component analysis and (b) independent component analysis. Difference between PCA and ICA interpretations on a bivariate data set.

V. RBFN STRUCTURE

The Radial Basis Function network (RBFN) has been found very attractive for many engineering problems because it has a very compact topology, and the local tuning capability of its neurons leads to high learning speed [21]. The RBFN is a feed-forward architecture with an input layer, a hidden layer and an output layer; its structure is shown in Fig. 4. The input layer units are fully connected to the hidden layer units. In this structure, the hidden nodes are called RBFN units, and they are fully connected to the output layer units.

Page 4: project-8

The activation function of the RBF units is expressed as follows [7]:

R_i(X) = R_i(d_i(X)),   i = 1, 2, ..., s    (7)

d_i(X) = ||X − C_i|| / σ_i    (8)

where d_i(X) is called the distance function of the i-th RBFN unit, X = (x_1, x_2, ..., x_n)^T is an n-dimensional input feature vector, C_i is an n-dimensional vector called the center of the i-th RBFN unit, σ_i is the width of the i-th RBFN unit, and s is the number of RBFN units. Typically, a Gaussian function is chosen as the activation function of the RBFN units:

R_i(X) = exp[−d_i²(X)]    (9)

The output units are linear, so the j-th output for input X is given by equation (10):

y_j(X) = b(j) + Σ_{i=1..s} R_i(X) W2(j, i)    (10)

where W2(j, i) is the connection weight of the i-th RBFN unit to the j-th output node and b(j) is the bias of the j-th output. The bias is omitted in this network in order to reduce the network complexity; therefore equation (10) contracts to the simpler equation (11):

y_j(X) = Σ_{i=1..s} R_i(X) W2(j, i)    (11)
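Equations (7)-(11) can be sketched as a forward pass plus a linear least-squares fit of the output weights W2, a common way to train an RBFN output layer. The centers, shared width, and regression task below are synthetic assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D regression task: learn y = sin(2*pi*x) on [0, 1].
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0])

# RBFN units: evenly spaced centers C_i and a shared width sigma (assumed).
C = np.linspace(0, 1, 12).reshape(-1, 1)   # s = 12 units
sigma = 0.1

def rbf_layer(X, C, sigma):
    # Eqs. (8)-(9): d_i(X) = ||X - C_i|| / sigma, then R_i(X) = exp(-d_i^2).
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) / sigma
    return np.exp(-d**2)

R = rbf_layer(X, C, sigma)

# Eq. (11), bias omitted: y(X) = sum_i R_i(X) W2_i.
# The linear output layer is fit in closed form by least squares.
W2, *_ = np.linalg.lstsq(R, y, rcond=None)

pred = R @ W2
print("training MSE:", np.mean((pred - y)**2))
```

The closed-form fit of the linear output layer is exactly what gives the RBFN its training-speed advantage over back-propagated MLPs mentioned in the introduction.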

Fig. 3. A flow chart of the applied clustering method. k is the number of input-output pairs, m is the number of clusters, X_ctm is the center of cluster number m and Clust is the set of generated clusters.

Fig. 4. RBF neural network structure.

VI. NUMERICAL RESULTS

The IEEE 30-bus test system is selected to verify the effectiveness of the proposed algorithms. It consists of 6 generators, 21 PQ buses and 41 lines. The load-flow program converges for 37 line outages, and the critical lines are identified under several loading conditions. In this paper, the 11 most critical lines are considered for the study. By randomly changing the loads on the PQ buses between 50% and 150% of their base values, 2500 loading vectors are generated; 2000 vectors are used for training and the remaining vectors are used for the test. In [9], all active and reactive loads on PQ buses are considered for training. There are 21 PQ buses in the IEEE 30-bus system, so the method of [9] generates 42-dimensional input training vectors. Because of the large input set, this method leads to a slow training process and is therefore not suitable for large-scale power systems. In the present article, three different algorithms are proposed to reduce the dimension of the input vectors and improve the training speed of the neural network.
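The loading-vector generation step described above can be sketched as follows; the base-case loads are synthetic placeholders, not the actual IEEE 30-bus data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder base-case loads for 21 PQ buses (NOT the real IEEE 30-bus data).
P0 = rng.uniform(2, 60, 21)
Q0 = rng.uniform(1, 30, 21)

# 2500 loading vectors: each load scaled independently to 50%-150% of base.
n_vec = 2500
scale_P = rng.uniform(0.5, 1.5, size=(n_vec, 21))
scale_Q = rng.uniform(0.5, 1.5, size=(n_vec, 21))
loading = np.hstack([scale_P * P0, scale_Q * Q0])   # 42-dimensional vectors

train, test = loading[:2000], loading[2000:]        # 2000 train / 500 test
print(train.shape, test.shape)
```

These 42-dimensional vectors are the raw inputs that the three feature extraction algorithms then compress.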

A. First algorithm

As mentioned in section III-A, three groups of parameters are considered for feature extraction. The first group includes the values of the active and reactive loads on the weak buses.

Page 5: project-8

Weak buses can be identified using the RSV index. The result of the bus ranking for the weakest buses is presented in Table I; the higher the rank, the weaker the bus. In addition to the weak buses, the buses that are terminals of the critical lines must be considered to enhance the accuracy of the contingency ranking. Finally, the selected buses are 26, 29, 30, 24, 21, 15, 12, 10, 4 and 2. The active and reactive loads on these buses, together with the third parameter group mentioned in III-A, are used for feature extraction. Using this algorithm, the dimension of the input vectors is reduced from 42 to 22.

Then, the clustering method is applied in order to reduce the number of training vectors. Selecting a neighborhood radius of 0.085, 1024 clusters (training vectors) are chosen for training; thus the number of training vectors is reduced from 2000 to 1024.

Table I. Weak bus ranking

Rank  Bus  Index     Rank  Bus  Index
 1    26   0.2118    16    10   0.1537
 2    30   0.2062    17    16   0.1513
 3    29   0.2005    18    12   0.1467
 4    25   0.1883    19    13   0.1441
 5    24   0.1777    20     9   0.1378
 6    27   0.1743    21    11   0.1358
 7    19   0.1719    22    28   0.1138
 8    23   0.1715    23     8   0.1057
 9    18   0.1703    24     6   0.1026
10    20   0.1693    25     7   0.0951
11    22   0.1629    26     4   0.0912
12    21   0.1630    27     5   0.0798
13    15   0.1587    28     3   0.0763
14    14   0.1574    29     2   0.0471
15    17   0.1555    30     1   1e-5

B. Second algorithm

In this method, PCA is employed in order to reduce the dimension of the input training vectors. Using this algorithm, it is found that 20 eigenvalues are much bigger than the others. Hence, their corresponding eigenvectors (components) are selected, and after implementing the steps described in III-B, the dimension of the vectors is reduced from 42 to 20. Ten principal components (as a sample) and their corresponding variances are shown in Fig. 5. The sum of all 42 variances corresponding to all 42 components is 100%.

Finally, the clustering method is applied in order to reduce the number of vectors. Selecting a neighborhood radius of 0.48, 1033 clusters (training vectors) are chosen for training, so the number of training vectors is reduced from 2000 to 1033.


Fig. 5. 10 principal components and their corresponding variances.

C. Third algorithm

In this section, ICA is applied for dimension reduction of the input training vectors. Two different ICA algorithms have been tested on the input training vectors, and the dimensionally reduced vectors are then used to train the RBF neural network. These algorithms are Cardoso's Equivariant Adaptive Separation via Independence (EASI) [22] and Bell and Sejnowski's infomax algorithm (BSICA) [17]. The minimum square error and training time for both algorithms are exhibited in Table II; these values are obtained for a specific desired vector dimension (2000 20-dimensional vectors). Table II shows that the EASI algorithm leads to more accurate training than BSICA; hence, it has been selected for the study. Using trial and error, it is found that 20 of the 42 components lead to acceptable accuracy, so the dimension of the input training vectors has been reduced from 42 to 20.

Table II. MSE and training speed corresponding to EASI and BSICA.

Algorithm  MSE     Training time (s)  Hidden layer neurons
EASI       0.0035  25.42              50
BSICA      0.0152  27.42              50

After that, the clustering algorithm is applied to reduce the number of training vectors. Selecting a neighborhood radius of 1.65, 1024 clusters (training vectors) are chosen for training. The performance (speed and accuracy) of the three proposed algorithms, compared with the case in which the active and reactive loads at all PQ buses are used for training (case 1), is shown in Table III. Note that in case 1 there are 2000 42-dimensional input training vectors. The number of hidden layer neurons for each algorithm has been obtained by trial and error. This study demonstrates that the first algorithm, based on weak bus identification, yields a more accurate and faster training

Page 6: project-8

procedure in comparison with PCA and ICA. In addition, ICA shows better performance than PCA. The results of voltage stability index prediction for the base case (system with no contingencies) and of contingency ranking for two different loading conditions, using the proposed algorithms, are exhibited in Table IV. It is evident that fast performance, accurate evaluation and good prediction accuracy for the voltage stability index have been obtained.

Table III. Performance of the proposed algorithms.

Algorithm  MSE     Training time (s)  Hidden layer neurons
case 1     0.0006  357.7              300
weak bus   0.0031  7.4                50
PCA        0.0104  92.5               300
ICA        0.0043  7.4                50

VII. CONCLUSION

In this paper, an RBF neural network is employed to accurately predict the voltage stability index (MSV) and to rank contingencies under different loading conditions. Three different algorithms are proposed to reduce the dimension and the number of the training vectors in order to improve the speed of the neural network training process. The first algorithm is a novel feature extraction approach based on power system engineering concepts; the second and third are based on statistical methods. These algorithms exhibit good performance in voltage stability prediction and online contingency ranking, whereas computing the MSV using conventional methods is very time consuming for large-scale power systems.

Table IV. Result of voltage stability index prediction and online contingency ranking for two different loading conditions (a) and (b).

(a)

Load flow (MSV base 0.1652):
  rank  2      11     12     9      3      10     4      7      8      6      5
  MSV   0.0233 0.0981 0.0991 0.1100 0.1310 0.1431 0.1439 0.1442 0.1503 0.1533 0.1570

case 1 (MSV base 0.1653):
  rank  2      11     12     9      3      10     4      7      8      6      5
  MSV   0.0225 0.0982 0.0995 0.1101 0.1311 0.1432 0.1441 0.1443 0.1505 0.1534 0.1578

weak bus (MSV base 0.1650):
  rank  2      11     12     9      3      10     4      7      8      6      5
  MSV   0.0231 0.1002 0.1006 0.1103 0.1319 0.1436 0.1445 0.1452 0.1518 0.1520 0.1573

PCA (MSV base 0.1657):
  rank  2      11     12     9      3      10     4      7      6      8      5
  MSV   0.0250 0.0890 0.1015 0.1107 0.1333 0.1421 0.1437 0.1445 0.1513 0.1516 0.1593

ICA (MSV base 0.1655):
  rank  2      11     12     9      3      10     4      7      8      6      5
  MSV   0.0214 0.0957 0.1017 0.1106 0.1324 0.1431 0.1435 0.1452 0.1511 0.1528 0.1555

(b)

Load flow (MSV base 0.1627):
  rank  2      12     11     9      3      10     4      7      5      8      6
  MSV   0.0251 0.0816 0.1000 0.1090 0.1318 0.1391 0.1405 0.1423 0.1454 0.1492 0.1534

case 1 (MSV base 0.1627):
  rank  2      12     11     9      3      10     4      7      5      8      6
  MSV   0.0246 0.0817 0.1000 0.1090 0.1318 0.1392 0.1406 0.1424 0.1461 0.1492 0.1534

weak bus (MSV base 0.1627):
  rank  2      12     11     9      3      10     4      7      5      8      6
  MSV   0.0232 0.0805 0.0983 0.1086 0.1298 0.1387 0.1391 0.1417 0.1449 0.1475 0.1529

PCA (MSV base 0.1627):
  rank  2      12     11     9      3      10     4      7      5      8      6
  MSV   0.0255 0.0858 0.1020 0.1109 0.1343 0.1424 0.1435 0.1448 0.1516 0.1522 0.1540

ICA (MSV base 0.1627):
  rank  2      12     11     9      3      10     4      7      5      8      6
  MSV   0.0225 0.0811 0.1023 0.1107 0.1340 0.1383 0.1421 0.1423 0.1454 0.1492 0.1535

VIII. REFERENCES

[1] S. Sahari, A. F. Abidin, and T. K. Abdulrahman, "Development of Artificial Neural Network for Voltage Stability Monitoring," in Proc. National Power and Energy Conference (PECon) 2003, Bangi, Malaysia.

[2] T. Jain, L. Srivastava, and S. N. Singh, "Fast Voltage Contingency Screening using Radial Basis Function Neural Network," IEEE Trans. Power Syst., vol. 18, no. 4, pp. 705-715, Nov. 2003.

[3] M. M. Begovic and A. G. Phadke, "Control of Voltage Stability using Sensitivity Analysis," IEEE Trans. on Power Systems, vol. 7, no. 1, pp. 114-123, Feb. 1992.

[4] N. Flatabo et al., "Voltage Stability Condition in a Power Transmission System calculated by Sensitivity Analysis," IEEE Trans. on Power Systems, vol. 5, no. 4, pp. 1286-1293, Nov. 1990.

[5] Y. L. Chen, C. W. Chang, and C. C. L, "Efficient Methods for Identifying Weak Nodes in Electrical Power Networks," IEE Proc. Gener. Transm. Distrib., vol. 142, no. 3, May 1995.

[6] M. T. Haque and A. M. Kashtiban, "Application of neural networks in power systems - a review," Transactions on Engineering, Computing and Technology, vol. 6, pp. 53-57, 2005.

[7] M. L. Scala, M. Trovato, and F. Torelli, "A neural network-based method for voltage security monitoring," IEEE Trans. Power Syst., vol. 11, no. 3, pp. 1332-1341, 1996.

[8] D. Popvic, D. Kukolj, and F. Kulic, "Monitoring and assessment of voltage stability margins using artificial neural networks with a reduced input set," IEE Proc. Gener. Transm. Distrib., vol. 145, no. 4, pp. 355-362, 1998.

[9] H. B. Wan and Y. H. Song, "Hybrid supervised and unsupervised neural network approach to voltage stability analysis," Electric Power Systems Research, vol. 47, no. 2, pp. 115-122, 1998.

[10] L. Srivastava, S. N. Singh, and J. Sharma, "Estimation of loadability margin using parallel self-organizing hierarchical neural network," Computers and Electrical Engineering, vol. 26, no. 2, pp. 151-167, 2000.

[11] S. Chakrabarti and B. Jeyasurya, "On-line voltage stability monitoring using artificial neural network," in Proc. 2004 Large Engineering Systems Conference on Power Engineering, Westin Nova Scotian, Canada, July 2004, pp. 71-75.

[12] D. K. Ranaweera and G. G. Karady, "Active power contingency ranking using a radial basis function network," Int. J. Eng. Intell. Syst. for Elect. Eng. Communications, vol. 2, no. 3, pp. 201-206, Sept. 1994.

[13] D. F. Morrison, Multivariate Statistical Methods, McGraw-Hill, New York, 1976.

[14] L. I. Smith, A Tutorial on Principal Components Analysis, 26 February 2002 (http://kybele.psych.cornell.edu/~edelman/Psych-465-spring-2003/PCA-tutorial).

[15] R. B. Panerai, A. Luisa, A. S. Ferreira, and O. F. Brum, "Principal component analysis of multiple noninvasive blood flow derived signals," IEEE Trans. Biomed. Eng., vol. 35, no. 7, 1998.

[16] P. Comon, "Independent component analysis - a new concept?," Signal Processing, vol. 36, no. 3, pp. 287-314, 1994.

[17] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129-1159, 1995.

[18] Z. Roth and Y. Baram, "Multidimensional density shaping by sigmoids," IEEE Transactions on Neural Networks, vol. 7, no. 5, pp. 1291-1298, 1996.

[19] M. Kermit and O. Tomic, "Independent component analysis applied on gas sensor array measurement data," IEEE Sensors Journal, vol. 3, no. 2, pp. 218-228, 2003.

[20] H. Spath, Cluster Dissection and Analysis: Theory, FORTRAN Programs, Examples, translated by J. Goldschmidt, Halsted Press, New York, 1985, 226 pp.

[21] J. Haddadnia and K. Faez, "Neural network human face recognition based on moment invariants," in Proc. IEEE International Conference on Image Processing, Thessaloniki, Greece, pp. 1018-1021, 7-10 October 2001.

[22] J.-F. Cardoso and B. H. Laheld, "Equivariant adaptive source separation," IEEE Transactions on Signal Processing, vol. 44, no. 12, Dec. 1996.