An Introduction to Neural Networks © 2013 Lower Columbia College



    PROCESS OF BACKPROPAGATION

    Neural Networks

    The Feed Forward Perceptron with Backpropagation

    Introduction

This article describes the learning process of a 3-layer Perceptron Neural Network employing the backpropagation teaching algorithm. To illustrate this process we will use a three-layer neural network with 3 input nodes, 2 hidden nodes and 1 output node, which is shown in the picture below:

[Figure: a three-layer perceptron with 3 input nodes, 2 hidden nodes, and 1 output node]

Each node, or artificial neuron (Threshold Logic Unit), is composed of two sections. The first section generates a sum of the products of the weight multipliers and input signals. The second section takes the result of the first section and puts it through its Activation function, which scales its input to a value between 0 and 1.

Signal e is the output of the first section, and y = f(e) is the output of the second section. Signal y is also the output signal of the artificial neuron.
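As a concrete example with made-up numbers: suppose a neuron receives inputs x1 = 1, x2 = 0, x3 = 1 through weights w1 = 0.5, w2 = 0.3, w3 = -0.2. The first section produces e = (0.5)(1) + (0.3)(0) + (-0.2)(1) = 0.3, and the second section produces y = f(0.3) = 1/(1 + exp(-0.3)) ≈ 0.574.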


    Workings of the Perceptron

    Feed Forward - Processing a Result from Inputs


Our Perceptron processes information by taking the inputs, x1, x2 and x3, and propagating them through each neuron of the perceptron one layer at a time.

There obviously is no summing at the Input Layer, but the Activation function may still be needed to scale each input between 0 and 1. Thus the outputs from the Input Layer are y1, y2 and y3. Then the output of each Input Layer node is multiplied by the weight of each interconnect as it is sent to each neuron in the next layer, the Hidden Layer.

(Please note that there are accompanying code snippets for the various stages of the perceptron processing and backpropagation algorithm; the declarations they share are sketched just below.)
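The snippets are written in C. Because each snippet was truncated in the original page capture, the loop bodies that follow are reconstructions; they assume the shared declarations sketched here, where the array names and the learning rate value are illustrative assumptions rather than necessarily the author's original identifiers:

#define NUM_INPUTS  3   // x1, x2, x3
#define NUM_HIDDEN  2
#define NUM_OUTPUTS 1

double y[NUM_INPUTS];         // Input Layer outputs (y1, y2, y3)
double yHidden[NUM_HIDDEN];   // Hidden Layer outputs
double yOutput[NUM_OUTPUTS];  // Output Layer outputs
double z[NUM_OUTPUTS];        // desired (target) outputs from the training data
double wInputToHidden[NUM_INPUTS][NUM_HIDDEN];    // InputToHidden weights
double wHiddenToOutput[NUM_HIDDEN][NUM_OUTPUTS];  // HiddenToOutput weights
double deltaOutput[NUM_OUTPUTS];  // Output Layer deltas
double deltaHidden[NUM_HIDDEN];   // Hidden Layer deltas
double eta = 0.5;                 // learning rate (assumed value)
double sum;
int i, h, o;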


#define sigmoid(x) (1.0 / (1.0 + exp(-(double)(x))))  // parentheses added so the macro is safe for expression arguments

// For Each Hidden node: form the sum of weight * input products,
// then apply the Activation function. (Loop body reconstructed;
// the original snippet was cut off at the '<' sign.)
for (h = 0; h < NUM_HIDDEN; h++) {
    sum = 0.0;
    for (i = 0; i < NUM_INPUTS; i++)
        sum += y[i] * wInputToHidden[i][h];   // e = sum of weighted inputs
    yHidden[h] = sigmoid(sum);                // y = f(e)
}


// For Each Output node: the same two sections, fed by the Hidden
// Layer outputs. (Loop body reconstructed as above.)
for (o = 0; o < NUM_OUTPUTS; o++) {
    sum = 0.0;
    for (h = 0; h < NUM_HIDDEN; h++)
        sum += yHidden[h] * wHiddenToOutput[h][o];
    yOutput[o] = sigmoid(sum);                // the perceptron's result
}


If the perceptron's actual output does not match the desired output from the training data, the weights must be modified. Modification is calculated using the algorithm that we call Backpropagation.

The remainder of this article is a description of one iteration of the Backpropagation process.

We start the Backpropagation process by producing a delta for each output. Each delta is the difference between the desired output, z, and the actual output, y:

// Find d (deltas) for Each Output node
// (loop body reconstructed: desired output minus actual output)
for (o = 0; o < NUM_OUTPUTS; o++)
    deltaOutput[o] = z[o] - yOutput[o];
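Continuing the made-up numbers from earlier: if the desired output is z = 1 and the network produced y = 0.574, the delta is 1 - 0.574 = 0.426, and the positive delta will nudge the weights feeding this output toward a larger result in the update below.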


Once we have the deltas for each output, we can use each delta to modify the weights associated with each input to each respective output, using the formula described in the previous section:

// Update Wts of Hidden to Output nodes
// (loop body reconstructed: add learning rate * delta * activation
//  derivative * the input signal that fed this weight)
for (h = 0; h < NUM_HIDDEN; h++)
    for (o = 0; o < NUM_OUTPUTS; o++)
        wHiddenToOutput[h][o] += eta * deltaOutput[o]
                                     * yOutput[o] * (1.0 - yOutput[o])
                                     * yHidden[h];
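A note on the y * (1 - y) factor in the reconstruction above: for the sigmoid activation f(e) = 1/(1 + exp(-e)), the derivative has the convenient closed form f'(e) = f(e) * (1 - f(e)) = y * (1 - y), so the derivative term in the update can be computed directly from each neuron's output without re-evaluating e.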


Once we have adjusted the HiddenToOutput weights, we backpropagate to the InputToHidden weights. Since the Hidden Layer is hidden, there is no way to have expected values for any Hidden Layer neuron in the training data set. Thus, each Hidden Layer delta is produced by summing the products of each Output delta and the weight of the interconnect between that Output and the respective Hidden Layer neuron. Our perceptron was originally shown with 1 output. However, here we imagine other outputs to illustrate how the Hidden deltas are formed by the sum of the products of each output delta with its weight multiplier:

// Compute deltaHidden[h] for Input Wt modification
// (loop body reconstructed: sum of output delta * interconnect weight)
for (h = 0; h < NUM_HIDDEN; h++) {
    sum = 0.0;
    for (o = 0; o < NUM_OUTPUTS; o++)
        sum += deltaOutput[o] * wHiddenToOutput[h][o];
    deltaHidden[h] = sum;
}
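Note that with our single output the sum collapses to deltaHidden[h] = deltaOutput[0] * wHiddenToOutput[h][0]; the inner loop only matters for the imagined extra outputs.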

    Change InputToHidden Weights

With the Hidden deltas defined, we can complete the learning iteration by changing the InputToHidden weight multipliers. The InputToHidden weights are modified by applying the same formula that we used to change the HiddenToOutput weight multipliers.


// Update Wts of Input to Hidden nodes
// (loop body reconstructed: same formula as the HiddenToOutput update)
for (i = 0; i < NUM_INPUTS; i++)
    for (h = 0; h < NUM_HIDDEN; h++)
        wInputToHidden[i][h] += eta * deltaHidden[h]
                                    * yHidden[h] * (1.0 - yHidden[h])
                                    * y[i];
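Putting it all together, here is a minimal, self-contained sketch of one complete learning iteration: feed forward, output deltas, the HiddenToOutput update, hidden deltas, and the InputToHidden update, in the order presented above. The inputs, target and starting weights are made-up values, and the whole program is an illustration of the process described here, not the author's original code:

#include <stdio.h>
#include <math.h>

#define NUM_INPUTS  3
#define NUM_HIDDEN  2
#define NUM_OUTPUTS 1
#define sigmoid(x) (1.0 / (1.0 + exp(-(double)(x))))

int main(void)
{
    /* Made-up training example and starting weights (assumptions). */
    double y[NUM_INPUTS]  = { 1.0, 0.0, 1.0 };   /* Input Layer outputs */
    double z[NUM_OUTPUTS] = { 1.0 };             /* desired output      */
    double wInputToHidden[NUM_INPUTS][NUM_HIDDEN] =
        { { 0.5, -0.4 }, { 0.3, 0.2 }, { -0.2, 0.1 } };
    double wHiddenToOutput[NUM_HIDDEN][NUM_OUTPUTS] = { { 0.6 }, { -0.3 } };
    double yHidden[NUM_HIDDEN], yOutput[NUM_OUTPUTS];
    double deltaOutput[NUM_OUTPUTS], deltaHidden[NUM_HIDDEN];
    double eta = 0.5, sum;                       /* assumed learning rate */
    int i, h, o;

    /* Feed Forward: Input Layer -> Hidden Layer -> Output Layer */
    for (h = 0; h < NUM_HIDDEN; h++) {
        sum = 0.0;
        for (i = 0; i < NUM_INPUTS; i++)
            sum += y[i] * wInputToHidden[i][h];
        yHidden[h] = sigmoid(sum);
    }
    for (o = 0; o < NUM_OUTPUTS; o++) {
        sum = 0.0;
        for (h = 0; h < NUM_HIDDEN; h++)
            sum += yHidden[h] * wHiddenToOutput[h][o];
        yOutput[o] = sigmoid(sum);
    }

    /* Backpropagation step 1: deltas for each Output node */
    for (o = 0; o < NUM_OUTPUTS; o++)
        deltaOutput[o] = z[o] - yOutput[o];

    /* Step 2: update the HiddenToOutput weights */
    for (h = 0; h < NUM_HIDDEN; h++)
        for (o = 0; o < NUM_OUTPUTS; o++)
            wHiddenToOutput[h][o] += eta * deltaOutput[o]
                                         * yOutput[o] * (1.0 - yOutput[o])
                                         * yHidden[h];

    /* Step 3: Hidden deltas from output deltas and interconnect weights
       (following the article's order, this uses the just-updated weights) */
    for (h = 0; h < NUM_HIDDEN; h++) {
        sum = 0.0;
        for (o = 0; o < NUM_OUTPUTS; o++)
            sum += deltaOutput[o] * wHiddenToOutput[h][o];
        deltaHidden[h] = sum;
    }

    /* Step 4: update the InputToHidden weights */
    for (i = 0; i < NUM_INPUTS; i++)
        for (h = 0; h < NUM_HIDDEN; h++)
            wInputToHidden[i][h] += eta * deltaHidden[h]
                                        * yHidden[h] * (1.0 - yHidden[h])
                                        * y[i];

    printf("output before this update: %f (delta %f)\n",
           yOutput[0], deltaOutput[0]);
    return 0;
}

In a real training run these four steps repeat, together with the feed-forward pass, for each example in the training set until the output deltas become acceptably small.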


    Copyleft 2010 - Feel free to use for educational purposes

    by Cary Rhode, Math instructor at Lower Columbia College, Longview, WA, USA

    and all around Great Guy [email protected]

    Special acknowledgement to:

    Mariusz Bernacki

    Przemyslaw Wlodarczyk

mgr inż. Adam Golda (2005)

Katedra Elektroniki AGH (Department of Electronics, AGH University of Science and Technology)


    for use of their fundamental illustrations from their web site:

Principles of Backpropagation (http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html)

    Modifications were made to each illustration to match

    the 3-layer Perceptron configuration that was used here.
