
CSC 367 2.0 Mathematical Computing

    Assignment 3

    Radial Basis Functions

    AS2010377

    M.K.H.Gunasekara

    Special Part 1 Department of Computer Science

    UNIVERSITY OF SRI JAYEWARDENEPURA


Table of Contents

Introduction
Methodology
Implementation
Results
Discussion
Appendices


Introduction

Neural networks offer a powerful framework for representing nonlinear mappings from several inputs to one or more outputs.

An important application of neural networks is regression. Instead of mapping the inputs into a discrete class label, the neural network maps the input variables into continuous values. A major class of neural networks is the radial basis function (RBF) neural network. We will look at the architecture of RBF neural networks, followed by their applications in both regression and classification.

In this report the radial basis function network is discussed, with its hidden-layer centers selected by an unsupervised learning algorithm. The network is simulated to classify the three flower species in the Iris data set, available at http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.


Methodology

Radial Basis Function

Figure 01: One hidden layer with radial basis activation functions

Radial basis function (RBF) networks typically have three layers:

    1. Input Layer

    2. A hidden layer with a non-linear RBF activation function

    3. Output Layer

The output of the network is

    \varphi(x) = \sum_{i=1}^{N} a_i \, \rho(\lVert x - c_i \rVert)

where N is the number of neurons in the hidden layer, c_i is the center vector for neuron i, and a_i is the weight of neuron i in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance, and the radial basis function is commonly taken to be the Gaussian function

    \rho(\lVert x - c_i \rVert) = \exp\left( -\frac{\lVert x - c_i \rVert^2}{2\sigma^2} \right)   ------ (1)

There are some other radial basis functions:

Logistic basis function

    \rho(r) = \frac{1}{1 + \exp(r^2 / \sigma^2)}

Multiquadric

    \rho(r) = \sqrt{r^2 + \sigma^2}
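As an illustration (this snippet is not part of the assignment code), the three basis functions can be written as MATLAB anonymous functions of the distance r; the width sigma = 1 is an arbitrary choice, and the logistic form assumed here is the one given above:

sigma = 1.0;                                     % arbitrary width for illustration
gaussianRBF  = @(r) exp(-r.^2 ./ (2*sigma^2));   % equation (1)
logisticRBF  = @(r) 1 ./ (1 + exp(r.^2 ./ sigma^2));
multiquadric = @(r) sqrt(r.^2 + sigma^2);

r = 0:0.5:2;
gaussianRBF(r)    % decays from 1 towards 0 as r grows
multiquadric(r)   % grows with r: an unbounded basis function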


The input nodes are connected by weights to a set of RBF neurons, which fire in proportion to the distance between the input and the neuron's center in the weight space. The activations of these nodes are used as inputs to the second layer. The second layer (output layer) is treated as a simple perceptron network.

Training the RBF Network

Training can be done by first positioning the RBF nodes and then using the activations of the RBF nodes to train the linear outputs.

Positioning the RBF nodes can be done in two ways. The first method is randomly picking some of the data points to act as basis function centers. The second method is positioning the nodes so that they are representative of typical inputs, for example by using the k-means clustering algorithm, as sketched below.
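A minimal sketch of both strategies, assuming X is the 150-by-4 data matrix (called arr in the appendix code) and M = 3 hidden nodes:

M = 3;                            % number of hidden nodes
N = size(X,1);
p = randperm(N);
Centers = X(p(1:M), :);           % strategy 1: random data points as centers
[idx, Centers] = kmeans(X, M);    % strategy 2: k-means centers (overwrites strategy 1)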

The activation function also has a width (standard deviation) parameter that must be chosen.

One option is to give all nodes the same width and test many different widths on a validation set, selecting the one that works best. Alternatively, we can select the width of the RBF nodes so that the whole input space is covered by the receptive fields. In that case the width of the Gaussian is set according to the maximum distance between the locations of the hidden nodes (d) and the number of hidden nodes (M):

    \sigma = \frac{d}{\sqrt{2M}}   ------ (2)
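In MATLAB this width can be computed directly from the matrix of centers; a minimal sketch using pdist from the Statistics Toolbox, equivalent to the explicit pairwise-distance code in the Appendices:

M = size(Centers,1);        % number of hidden nodes
d = max(pdist(Centers));    % maximum distance between any two centers
sigma = d / sqrt(2*M);      % equation (2)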

We can also use the normalized Gaussian function:

    \rho(\lVert x - c_i \rVert) = \frac{\exp(-\lVert x - c_i \rVert^2 / 2\sigma^2)}{\sum_{j=1}^{N} \exp(-\lVert x - c_j \rVert^2 / 2\sigma^2)}   ------ (3)

The outputs of the RBF layer, h_i(x) = \rho(\lVert x - c_i \rVert) for i = 1, ..., N, are then fed to the second layer.
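A minimal sketch computing both variants for a whole data set, assuming X is an N-by-4 data matrix, Centers is M-by-4, and sigma comes from equation (2); pdist2 is from the Statistics Toolbox:

D2 = pdist2(X, Centers).^2;                 % squared distances to each center
H  = exp(-D2 ./ (2*sigma^2));               % non-normalized Gaussian, equation (1)
Hn = H ./ repmat(sum(H,2), 1, size(H,2));   % normalized Gaussian, equation (3)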

Training the Perceptron Network

We can train the perceptron network using a supervised learning method. Therefore we train the MLP network according to the targets.
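Because the second layer is linear, its weights can also be obtained in closed form by least squares instead of iterative training. A minimal sketch of this alternative (not the method used in the report), with a synthetic activation matrix H standing in for the real RBF outputs:

H = rand(150,3);                             % stands in for the RBF activations
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];  % class labels 1..3
Y = full(sparse(1:150, T, 1));               % one-hot target matrix, 150-by-3
Ha = [H ones(size(H,1),1)];                  % append a bias column
W = Ha \ Y;                                  % least-squares output weights
[~, pred] = max(Ha*W, [], 2);                % predicted class of each sample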


Implementation

The implementation was done using MATLAB 7.10 (2010), following these steps:

1. Locate the RBF nodes at the centers

2. Calculate the width sigma for the Gaussian function

3. Calculate the outputs of the RBF layer (steps 1-3 are unsupervised training)

4. Make a perceptron network for the second layer (I used an MLP network without a hidden layer)

5. Train the MLP network according to the targets and inputs, where the inputs are the outputs of the RBF layer (steps 4-5 are supervised training)

6. Simulate the network

I implemented the RBF network with different strategies to compare the results:

- Using randomly selected centers
- Using k-means cluster centers
- Using the non-normalized Gaussian function
- Using the normalized Gaussian function
- Using an SVM for the second layer


    Results

    sepal length sepal width petal length petal width Expected Target Actual Output

    5.1 3.5 1.4 0.2 Iris-setosa Iris-setosa

    4.9 3 1.4 0.2 Iris-setosa Iris-setosa

    4.7 3.2 1.3 0.2 Iris-setosa Iris-setosa

    4.6 3.1 1.5 0.2 Iris-setosa Iris-setosa

    5 3.6 1.4 0.2 Iris-setosa Iris-setosa

    5.4 3.9 1.7 0.4 Iris-setosa Iris-setosa

    4.6 3.4 1.4 0.3 Iris-setosa Iris-setosa

    5 3.4 1.5 0.2 Iris-setosa Iris-setosa

    4.4 2.9 1.4 0.2 Iris-setosa Iris-setosa

    4.9 3.1 1.5 0.1 Iris-setosa Iris-setosa

    5.4 3.7 1.5 0.2 Iris-setosa Iris-setosa

    4.8 3.4 1.6 0.2 Iris-setosa Iris-setosa

    4.8 3 1.4 0.1 Iris-setosa Iris-setosa

    4.3 3 1.1 0.1 Iris-setosa Iris-setosa

    5.8 4 1.2 0.2 Iris-setosa Iris-setosa

    5.7 4.4 1.5 0.4 Iris-setosa Iris-setosa

    5.4 3.9 1.3 0.4 Iris-setosa Iris-setosa

    5.1 3.5 1.4 0.3 Iris-setosa Iris-setosa

    5.7 3.8 1.7 0.3 Iris-setosa Iris-setosa

    5.1 3.8 1.5 0.3 Iris-setosa Iris-setosa

    5.4 3.4 1.7 0.2 Iris-setosa Iris-setosa

    5.1 3.7 1.5 0.4 Iris-setosa Iris-setosa

    4.6 3.6 1 0.2 Iris-setosa Iris-setosa

    5.1 3.3 1.7 0.5 Iris-setosa Iris-setosa

    4.8 3.4 1.9 0.2 Iris-setosa Iris-setosa

    5 3 1.6 0.2 Iris-setosa Iris-setosa

    5 3.4 1.6 0.4 Iris-setosa Iris-setosa

    5.2 3.5 1.5 0.2 Iris-setosa Iris-setosa

    5.2 3.4 1.4 0.2 Iris-setosa Iris-setosa

    4.7 3.2 1.6 0.2 Iris-setosa Iris-setosa

    4.8 3.1 1.6 0.2 Iris-setosa Iris-setosa

    5.4 3.4 1.5 0.4 Iris-setosa Iris-setosa

    5.2 4.1 1.5 0.1 Iris-setosa Iris-setosa

    5.5 4.2 1.4 0.2 Iris-setosa Iris-setosa

    4.9 3.1 1.5 0.1 Iris-setosa Iris-setosa

    5 3.2 1.2 0.2 Iris-setosa Iris-setosa

    5.5 3.5 1.3 0.2 Iris-setosa Iris-setosa

    4.9 3.1 1.5 0.1 Iris-setosa Iris-setosa


    4.4 3 1.3 0.2 Iris-setosa Iris-setosa

    5.1 3.4 1.5 0.2 Iris-setosa Iris-setosa

    5 3.5 1.3 0.3 Iris-setosa Iris-setosa

    4.5 2.3 1.3 0.3 Iris-setosa Iris-setosa

    4.4 3.2 1.3 0.2 Iris-setosa Iris-setosa

    5 3.5 1.6 0.6 Iris-setosa Iris-setosa

    5.1 3.8 1.9 0.4 Iris-setosa Iris-setosa

    4.8 3 1.4 0.3 Iris-setosa Iris-setosa

    5.1 3.8 1.6 0.2 Iris-setosa Iris-setosa

    4.6 3.2 1.4 0.2 Iris-setosa Iris-setosa

    5.3 3.7 1.5 0.2 Iris-setosa Iris-setosa

    5 3.3 1.4 0.2 Iris-setosa Iris-setosa

    7 3.2 4.7 1.4 Iris-versicolor FALSE

    6.4 3.2 4.5 1.5 Iris-versicolor Iris-versicolor

    6.9 3.1 4.9 1.5 Iris-versicolor FALSE

    5.5 2.3 4 1.3 Iris-versicolor Iris-versicolor

    6.5 2.8 4.6 1.5 Iris-versicolor Iris-versicolor

    5.7 2.8 4.5 1.3 Iris-versicolor Iris-versicolor

    6.3 3.3 4.7 1.6 Iris-versicolor Iris-versicolor

    4.9 2.4 3.3 1 Iris-versicolor Iris-versicolor

    6.6 2.9 4.6 1.3 Iris-versicolor Iris-versicolor

    5.2 2.7 3.9 1.4 Iris-versicolor Iris-versicolor

    5 2 3.5 1 Iris-versicolor Iris-versicolor

    5.9 3 4.2 1.5 Iris-versicolor Iris-versicolor

    6 2.2 4 1 Iris-versicolor Iris-versicolor

    6.1 2.9 4.7 1.4 Iris-versicolor Iris-versicolor

    5.6 2.9 3.6 1.3 Iris-versicolor Iris-versicolor

    6.7 3.1 4.4 1.4 Iris-versicolor Iris-versicolor

    5.6 3 4.5 1.5 Iris-versicolor Iris-versicolor

    5.8 2.7 4.1 1 Iris-versicolor Iris-versicolor

    6.2 2.2 4.5 1.5 Iris-versicolor Iris-versicolor

    5.6 2.5 3.9 1.1 Iris-versicolor Iris-versicolor

    5.9 3.2 4.8 1.8 Iris-versicolor Iris-versicolor

    6.1 2.8 4 1.3 Iris-versicolor Iris-versicolor

    6.3 2.5 4.9 1.5 Iris-versicolor FALSE

    6.1 2.8 4.7 1.2 Iris-versicolor Iris-versicolor

    6.4 2.9 4.3 1.3 Iris-versicolor Iris-versicolor

    6.6 3 4.4 1.4 Iris-versicolor Iris-versicolor

    6.8 2.8 4.8 1.4 Iris-versicolor FALSE

    6.7 3 5 1.7 Iris-versicolor FALSE

    6 2.9 4.5 1.5 Iris-versicolor Iris-versicolor

    5.7 2.6 3.5 1 Iris-versicolor Iris-versicolor

    5.5 2.4 3.8 1.1 Iris-versicolor Iris-versicolor


    5.5 2.4 3.7 1 Iris-versicolor Iris-versicolor

    5.8 2.7 3.9 1.2 Iris-versicolor Iris-versicolor

    6 2.7 5.1 1.6 Iris-versicolor FALSE

    5.4 3 4.5 1.5 Iris-versicolor Iris-versicolor

    6 3.4 4.5 1.6 Iris-versicolor Iris-versicolor

    6.7 3.1 4.7 1.5 Iris-versicolor FALSE

    6.3 2.3 4.4 1.3 Iris-versicolor Iris-versicolor

    5.6 3 4.1 1.3 Iris-versicolor Iris-versicolor

    5.5 2.5 4 1.3 Iris-versicolor Iris-versicolor

    5.5 2.6 4.4 1.2 Iris-versicolor Iris-versicolor

    6.1 3 4.6 1.4 Iris-versicolor Iris-versicolor

    5.8 2.6 4 1.2 Iris-versicolor Iris-versicolor

    5 2.3 3.3 1 Iris-versicolor Iris-versicolor

    5.6 2.7 4.2 1.3 Iris-versicolor Iris-versicolor

    5.7 3 4.2 1.2 Iris-versicolor Iris-versicolor

    5.7 2.9 4.2 1.3 Iris-versicolor Iris-versicolor

    6.2 2.9 4.3 1.3 Iris-versicolor Iris-versicolor

    5.1 2.5 3 1.1 Iris-versicolor Iris-versicolor

    5.7 2.8 4.1 1.3 Iris-versicolor Iris-versicolor

    6.3 3.3 6 2.5 Iris-virginica Iris-virginica

    5.8 2.7 5.1 1.9 Iris-virginica Iris-virginica

    7.1 3 5.9 2.1 Iris-virginica Iris-virginica

    6.3 2.9 5.6 1.8 Iris-virginica Iris-virginica

    6.5 3 5.8 2.2 Iris-virginica Iris-virginica

    7.6 3 6.6 2.1 Iris-virginica Iris-virginica

    4.9 2.5 4.5 1.7 Iris-virginica FALSE

    7.3 2.9 6.3 1.8 Iris-virginica Iris-virginica

    6.7 2.5 5.8 1.8 Iris-virginica Iris-virginica

    7.2 3.6 6.1 2.5 Iris-virginica Iris-virginica

    6.5 3.2 5.1 2 Iris-virginica Iris-virginica

    6.4 2.7 5.3 1.9 Iris-virginica Iris-virginica

    6.8 3 5.5 2.1 Iris-virginica Iris-virginica

    5.7 2.5 5 2 Iris-virginica Iris-virginica

    5.8 2.8 5.1 2.4 Iris-virginica Iris-virginica

    6.4 3.2 5.3 2.3 Iris-virginica Iris-virginica

    6.5 3 5.5 1.8 Iris-virginica Iris-virginica

    7.7 3.8 6.7 2.2 Iris-virginica Iris-virginica

    7.7 2.6 6.9 2.3 Iris-virginica Iris-virginica

    6 2.2 5 1.5 Iris-virginica Iris-virginica

    6.9 3.2 5.7 2.3 Iris-virginica Iris-virginica

    5.6 2.8 4.9 2 Iris-virginica Iris-virginica

    7.7 2.8 6.7 2 Iris-virginica Iris-virginica

    6.3 2.7 4.9 1.8 Iris-virginica Iris-virginica


    6.7 3.3 5.7 2.1 Iris-virginica Iris-virginica

    7.2 3.2 6 1.8 Iris-virginica Iris-virginica

    6.2 2.8 4.8 1.8 Iris-virginica Iris-virginica

    6.1 3 4.9 1.8 Iris-virginica Iris-virginica

    6.4 2.8 5.6 2.1 Iris-virginica Iris-virginica

    7.2 3 5.8 1.6 Iris-virginica Iris-virginica

    7.4 2.8 6.1 1.9 Iris-virginica Iris-virginica

    7.9 3.8 6.4 2 Iris-virginica Iris-virginica

    6.4 2.8 5.6 2.2 Iris-virginica Iris-virginica

    6.3 2.8 5.1 1.5 Iris-virginica Iris-virginica

    6.1 2.6 5.6 1.4 Iris-virginica Iris-virginica

    7.7 3 6.1 2.3 Iris-virginica Iris-virginica

    6.3 3.4 5.6 2.4 Iris-virginica Iris-virginica

    6.4 3.1 5.5 1.8 Iris-virginica Iris-virginica

    6 3 4.8 1.8 Iris-virginica FALSE

    6.9 3.1 5.4 2.1 Iris-virginica Iris-virginica

    6.7 3.1 5.6 2.4 Iris-virginica Iris-virginica

    6.9 3.1 5.1 2.3 Iris-virginica Iris-virginica

    5.8 2.7 5.1 1.9 Iris-virginica Iris-virginica

    6.8 3.2 5.9 2.3 Iris-virginica Iris-virginica

    6.7 3.3 5.7 2.5 Iris-virginica Iris-virginica

    6.7 3 5.2 2.3 Iris-virginica Iris-virginica

    6.3 2.5 5 1.9 Iris-virginica Iris-virginica

    6.5 3 5.2 2 Iris-virginica Iris-virginica

    6.2 3.4 5.4 2.3 Iris-virginica Iris-virginica

    5.9 3 5.1 1.8 Iris-virginica Iris-virginica

The best RBF-network results were obtained with the non-normalized Gaussian activation function, with 9 mismatches out of 150 samples. For comparison, the best result with a plain MLP network was 4 mismatches.

Mismatches (out of 150) with an MLP network as the second layer:

                                Random centers    K-means centers
Non-normalized Gaussian         9                 9
Normalized Gaussian             11                11

Mismatches (out of 150) with a support vector machine as the second layer:

                                Random centers    K-means centers
Non-normalized Gaussian         14                10
Normalized Gaussian             14                17


Discussion

1. There are some drawbacks to unsupervised center selection in radial basis function networks.

2. We can use an SVM for the second layer instead of a perceptron, but it is not efficient for classification with more than two classes, because a separate binary SVM must be trained for each class, as sketched below.
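For reference, the appendix code works around this with a one-vs-rest scheme: one binary SVM per class, assigning each sample to the first class whose SVM accepts it. Schematically, with H the RBF activation matrix and labels the vector of class labels 1 to 3 (svmtrain and svmclassify from the Bioinformatics Toolbox, as in the Appendices):

for c = 1:3
    svmStruct(c) = svmtrain(H, labels == c);   % binary: class c vs the rest
end
for i = 1:size(H,1)
    for c = 1:3
        if svmclassify(svmStruct(c), H(i,:))
            break;                             % first accepting SVM wins
        end
    end
    pred(i) = c;
end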


Appendices

MATLAB Source Code for RBF Network with MLP Network

clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function
[arr tx] = xlsread('data.xls');
Centers = zeros(3,4);

% I found the centers as the mean of the samples in each class
for i=1:50
    Centers(1,1)=arr(i,1)+Centers(1,1);
    Centers(1,2)=arr(i,2)+Centers(1,2);
    Centers(1,3)=arr(i,3)+Centers(1,3);
    Centers(1,4)=arr(i,4)+Centers(1,4);
end

for i=51:100
    Centers(2,1)=arr(i,1)+Centers(2,1);
    Centers(2,2)=arr(i,2)+Centers(2,2);
    Centers(2,3)=arr(i,3)+Centers(2,3);
    Centers(2,4)=arr(i,4)+Centers(2,4);
end

for i=101:150
    Centers(3,1)=arr(i,1)+Centers(3,1);
    Centers(3,2)=arr(i,2)+Centers(3,2);
    Centers(3,3)=arr(i,3)+Centers(3,3);
    Centers(3,4)=arr(i,4)+Centers(3,4);
end

for j=1:3
    Centers(j,1)=Centers(j,1)/50;
    Centers(j,2)=Centers(j,2)/50;
    Centers(j,3)=Centers(j,3)/50;
    Centers(j,4)=Centers(j,4)/50;
end

Centers

% Or we can use the k-means algorithm to calculate the cluster centers
k=3; % number of clusters
[IDX,C]=kmeans(arr,k);
C % RBF centres

% Uncomment the following line to use the k-means centers
%Centers=C;


% distances between the hidden nodes

% distance between hidden nodes 1 & 2
dist1 = sqrt((Centers(1,1)-Centers(2,1))^2 + (Centers(1,2)-Centers(2,2))^2 + ...
    (Centers(1,3)-Centers(2,3))^2 + (Centers(1,4)-Centers(2,4))^2);

% distance between hidden nodes 1 & 3
dist2 = sqrt((Centers(1,1)-Centers(3,1))^2 + (Centers(1,2)-Centers(3,2))^2 + ...
    (Centers(1,3)-Centers(3,3))^2 + (Centers(1,4)-Centers(3,4))^2);

% distance between hidden nodes 3 & 2
dist3 = sqrt((Centers(3,1)-Centers(2,1))^2 + (Centers(3,2)-Centers(2,2))^2 + ...
    (Centers(3,3)-Centers(2,3))^2 + (Centers(3,4)-Centers(2,4))^2);

% finding the maximum distance
maxdist=0;
if (dist1>dist2) & (dist1>dist3)
    maxdist=dist1;
end
if (dist2>dist1) & (dist2>dist3)
    maxdist=dist2;
end
if (dist3>dist1) & (dist3>dist2)
    maxdist=dist3;
end

% calculating the width, equation (2): sigma = d/sqrt(2M) with M = 3
sigma = maxdist/sqrt(2*3);

% calculating the outputs of the RBF layer
RBFoutput=zeros(150,3);
d=zeros(1,3);

% Non-normalized method: Gaussian activation, equation (1)
% (comment this block out and uncomment the next one to use the
% normalized activation function instead)
for i=1:150
    for j=1:3
        d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
            (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
        RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))));
    end
end


% Normalized method: Gaussian activation, equation (3)
% RBFNormSum=zeros(150,1);
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFNormSum(i,1) = exp(-(d(1,j)/(2*(sigma^2)))) + RBFNormSum(i,1);
%     end
%     d=[0 0 0];
% end
%
% % calculate the output of the normalized gaussian function
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))))/RBFNormSum(i,1);
%     end
%     d=[0 0 0];
% end

RBFoutput
RBFo=RBFoutput.'

% making the MLP network
% class labels: 1 = Iris-setosa, 2 = Iris-versicolor, 3 = Iris-virginica (50 each)
T=[ones(1,50) 2*ones(1,50) 3*ones(1,50)];

S=[3 1];
R=[0 1;0 1;0 1]

% used a feedforward neural network as the MLP, layer sizes [3 1]
MLPnet=newff(RBFo,S);

MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;


MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail = 4;

MLPnet = train(MLPnet,RBFo,T);

% simulating the neural network
y = sim(MLPnet,RBFo);
output = round(y.');
Target = T.';
compare = [T.' output]

% counting the mismatches
count = 0;
for i=1:150
    if (output(i)~=Target(i))
        count = count+1;
    end
end
Unmatched = count

MATLAB Source Code for RBF Network with SVM

clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function with Support Vector Machine
[arr tx] = xlsread('data.xls');
Centers = zeros(3,4);

% I found the centers as the mean of the samples in each class
for i=1:50
    Centers(1,1)=arr(i,1)+Centers(1,1);
    Centers(1,2)=arr(i,2)+Centers(1,2);
    Centers(1,3)=arr(i,3)+Centers(1,3);
    Centers(1,4)=arr(i,4)+Centers(1,4);
end

for i=51:100
    Centers(2,1)=arr(i,1)+Centers(2,1);
    Centers(2,2)=arr(i,2)+Centers(2,2);
    Centers(2,3)=arr(i,3)+Centers(2,3);
    Centers(2,4)=arr(i,4)+Centers(2,4);
end

for i=101:150
    Centers(3,1)=arr(i,1)+Centers(3,1);
    Centers(3,2)=arr(i,2)+Centers(3,2);
    Centers(3,3)=arr(i,3)+Centers(3,3);
    Centers(3,4)=arr(i,4)+Centers(3,4);
end

for j=1:3
    Centers(j,1)=Centers(j,1)/50;
    Centers(j,2)=Centers(j,2)/50;
    Centers(j,3)=Centers(j,3)/50;
    Centers(j,4)=Centers(j,4)/50;
end

Centers

% Or we can use the k-means algorithm to calculate the cluster centers
k=3; % number of clusters
[IDX,C]=kmeans(arr,k);
C % RBF centres

% using the k-means centers in this version
Centers=C;

% distances between the hidden nodes

% distance between hidden nodes 1 & 2
dist1 = sqrt((Centers(1,1)-Centers(2,1))^2 + (Centers(1,2)-Centers(2,2))^2 + ...
    (Centers(1,3)-Centers(2,3))^2 + (Centers(1,4)-Centers(2,4))^2);

% distance between hidden nodes 1 & 3
dist2 = sqrt((Centers(1,1)-Centers(3,1))^2 + (Centers(1,2)-Centers(3,2))^2 + ...
    (Centers(1,3)-Centers(3,3))^2 + (Centers(1,4)-Centers(3,4))^2);

% distance between hidden nodes 3 & 2
dist3 = sqrt((Centers(3,1)-Centers(2,1))^2 + (Centers(3,2)-Centers(2,2))^2 + ...
    (Centers(3,3)-Centers(2,3))^2 + (Centers(3,4)-Centers(2,4))^2);

% finding the maximum distance
maxdist=0;
if (dist1>dist2) & (dist1>dist3)
    maxdist=dist1;
end
if (dist2>dist1) & (dist2>dist3)
    maxdist=dist2;
end
if (dist3>dist1) & (dist3>dist2)
    maxdist=dist3;
end

% calculating the width, equation (2): sigma = d/sqrt(2M) with M = 3
sigma = maxdist/sqrt(2*3);


% calculating the outputs of the RBF layer
RBFoutput=zeros(150,3);
d=zeros(1,3);

% Non-normalized method: Gaussian activation, equation (1)
% (comment this block out and uncomment the next one to use the
% normalized activation function instead)
for i=1:150
    for j=1:3
        d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
            (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
        RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))));
    end
end

% Normalized method: Gaussian activation, equation (3)
% RBFNormSum=zeros(150,1);
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFNormSum(i,1) = exp(-(d(1,j)/(2*(sigma^2)))) + RBFNormSum(i,1);
%     end
%     d=[0 0 0];
% end
%
% % calculate the output of the normalized gaussian function
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))))/RBFNormSum(i,1);
%     end
%     d=[0 0 0];
% end

RBFoutput
RBFo=RBFoutput.'

% making the SVM classifiers


group = cell(3,1);

% class labels as text, from the fifth column of the spreadsheet
for n=1:150
    tclass(n,1) = tx(n,5);
end

% one binary (one-vs-rest) group vector per class
group{1,1} = ismember(tclass,'Iris-setosa');
group{2,1} = ismember(tclass,'Iris-versicolor');
group{3,1} = ismember(tclass,'Iris-virginica');

[train, test] = crossvalind('holdOut',group{1,1});
cp = classperf(group{1,1});

% train one binary SVM per class
for i=1:3
    %svmStruct(i) = svmtrain(RBFoutput(train,:),group{i,1}(train),'showplot',true);
    svmStruct(i) = svmtrain(RBFoutput,group{i,1},'showplot',true);
end

% classify each sample with the first SVM that accepts it
for j=1:size(RBFoutput,1)
    for k=1:3
        if (svmclassify(svmStruct(k),RBFoutput(j,:)))
            break;
        end
    end
    result(j) = k;
end

% class labels: 1 = Iris-setosa, 2 = Iris-versicolor, 3 = Iris-virginica (50 each)
T=[ones(1,50) 2*ones(1,50) 3*ones(1,50)];

compare = [T.' result.']

Target = T.';
output = result.';
count = 0;
for i=1:150
    if (output(i)~=Target(i))
        count = count+1;
    end
end
Unmatched = count

MATLAB Source Code for MLP Network

clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% MLP Network
[arr tx] = xlsread('data.xls');

inputs = arr.';

% class labels: 1 = Iris-setosa, 2 = Iris-versicolor, 3 = Iris-virginica (50 each)
T=[ones(1,50) 2*ones(1,50) 3*ones(1,50)];

% multilayer network, layer sizes [4 3 1]
MLPnet=newff(inputs,[4 3 1]);

MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;
MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail = 4;

MLPnet = train(MLPnet,inputs,T);

% simulating the neural network
y = sim(MLPnet,inputs);
output = round(y.');
Target = T.';
compare = [T.' output]

% counting the mismatches
count = 0;
for i=1:150
    if (output(i)~=Target(i))
        count = count+1;
    end
end
Unmatched = count