
Controlling a crane arm with EMG sensors

Bachelor Thesis
Spring Term 2011

Author: Amos Zweig

Supervised by: Prof. Dr. Fumiya Iida, Dr. Alejandro Arieta, Keith Gunura


Contents

Abstract
Symbols
1 Introduction
2 EMG Sensors
  2.1 Functionality of Electromyography
  2.2 The layout of an EMG sensor
  2.3 Adapting the sensors to the Arduino
3 Pattern Recognition
  3.1 Pattern Recognition with a Neural Network
  3.2 Activity Check Pattern Recognition
4 Sensor Placement
5 Experiments
  5.1 Simulation
  5.2 Measurements
6 Results
  6.1 Simulation
  6.2 Measurements
7 The Robot
  7.1 Characteristics
  7.2 Controller
  7.3 Experiments with the Robot
8 Conclusions and Future Work
Bibliography
A Description of the Neural Network
B Code
  B.1 Matlab Code for Neural Network
  B.2 Arduino Code for Activity Check
C Sketches
  C.1 Sketches of the parts of the robot


    Abstract

This thesis describes the control of a four degrees of freedom robot arm through nerve signals. The robot can successfully pick a spoon out of a cup and place it into another cup. The nerve signals are measured from the forearm of the subject whenever the hand is moved. Three EMG sensors are used to measure the nerve signals, and either an activity check algorithm or a neural network is used to process them. As the measurements show, the activity check is superior to the neural network.


Symbols

i_val    input values
h_val    hidden values
o_val    output values
hw       weights of the connections to the hidden values
ow       weights of the connections to the output values
h_sum    hidden sum = weighted sum of all the input values
o_sum    output sum = weighted sum of all the hidden values

Acronyms and Abbreviations

DOF    Degrees of freedom
FFT    Fast Fourier transformation
EMG    Electromyography
NN     Neural network


    Chapter 1

    Introduction

Controlling a robot through nerve signals is a fascinating idea. It evokes the dream of leaving one's own body and being inside a different one. It would make it possible to influence a faraway environment through the body of a robot. Of course, with a computer this is already possible today, but not with the feeling of actually being inside the robot.
Now imagine using your nerve signals to control a robot attached to your body. The often mentioned wish for a third arm could all of a sudden become reality. Or the wish to have a second arm again... If it were possible to attach a robot hand to the forearm of a hand amputee and let him control it like his own hand, the robot could replace his lost hand, thus greatly improving his quality of life.
To improve the understanding of nerve signals and how they can be used to control a robot, the goal of this thesis was to build a robot gripper that can be controlled through nerve signals. Picking up the thought of a hand prosthesis, it was decided to use nerve signals gained from the forearm of the subject.


    Chapter 2

    EMG Sensors

    2.1 Functionality of Electromyography

Electromyography (EMG) is a method of recording nerve signals that are sent from the brain to the muscles of our body. Every time a muscle is contracted, the EMG sensor measures a nerve signal. There are two types of EMG sensors: surface EMG sensors and implantable ones. For the sake of simplicity, only surface EMG sensors were used in this thesis. An EMG sensor consists of two electrodes, which are placed on a muscle, oriented along its fibers. The sensor measures the voltage between these two electrodes, which is caused by nerve impulses. Even though a voltage difference is measured, nerve impulses are not transmitted electrically, as a common misconception suggests. They are caused by the diffusion of Na+, K+ and Cl- ions. In the relaxed state, the interior of a nerve tract, also known as an axon, holds most of the negative ions while the positive ions are outside of the membrane. The potential inside a nerve cell in its relaxed state is -70 mV. When a signal arrives through the axon, the local potential inside the cell starts to rise. As soon as it reaches -55 mV, the positive ions from outside are forced inside the cell through ion carriers while the negative ions are forced out of the cell. The local potential inside the cell reaches a peak of +30 mV. After that the cell returns to its relaxed state. The signal is passed on along the axon because some of the positive ions inside the cell diffuse along the axon, causing the potential next to the location that just reacted to rise to -55 mV. There the process repeats itself.
At the very moment an EMG sensor measures a voltage difference, a signal is traveling along a nerve cell beneath the two electrodes. The force of contraction of a muscle is proportional to the number of muscle cells that are contracted simultaneously. Every muscle cell has its own nerve cell that controls it. If more nerve cells are sending signals simultaneously, the muscle contracts harder. The sensor sums up all the signals passing through all the nerve cells beneath its electrodes. Therefore the amplitude of the measured signal is proportional to the force of contraction of the muscle.

    2.2 The layout of an EMG sensor

Surface EMG sensors compare the voltage between two electrodes on the skin of the subject. An extra electrode is used as a ground reference for all the measured signals. The amplitude of the measured voltage difference is around 70 mV. This signal is first amplified with a differential amplifier, then filtered and then amplified again. A low-pass filter avoids aliasing and a high-pass filter suppresses low-frequency distortions such as the heartbeat. An example of the resulting EMG signal


can be seen in figure 2.1. Figure 2.2 shows two EMG sensors, one with and one without isolation.

    Figure 2.1: Typical EMG signal
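The filtering described above happens in analog hardware on the sensor board. Purely as an illustration, an equivalent digital band-pass stage could be sketched in Matlab as follows; the cut-off frequencies of 20 Hz and 450 Hz, the sampling rate and all variable names are assumptions for this sketch, not values taken from the thesis (butter and filtfilt are from the Signal Processing Toolbox).

fs  = 1000;                                       % assumed sampling rate in Hz
raw = randn(1, 5*fs);                             % placeholder for a raw recording
[b, a] = butter(4, [20 450]/(fs/2), 'bandpass');  % 4th-order Butterworth band-pass
emg_filtered = filtfilt(b, a, raw);               % zero-phase filtering of the recording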

    2.3 Adapting the sensors to the Arduino

The EMG sensors had to be adapted to work with the Arduino microcontroller. The Arduino can only read input values between 0 V and 5 V, whereas the sensors produced output values between -5 V and +5 V. The level of amplification of the signal can be adjusted via a resistor on the circuit board of the sensor. This resistor was replaced with a potentiometer ranging from 0 to 5 kΩ. By changing the resistance of the potentiometer, the amplitude of the signal was set to 2.5 V. Figure 2.2 shows the EMG sensor with the potentiometer.
In order to shift the signal range from [-2.5 V, +2.5 V] to [0 V, 5 V], an offset voltage of 2.5 V was added to each signal. For this task, the circuit in figure 2.3 was soldered. The op-amps A and B stabilize the 2.5 V reference signal and the input signal. Op-amp C adds them together but also inverts the sign. D inverts the sign again from negative to positive.


Figure 2.2: EMG sensor with electrodes, potentiometer and yellow shrink tube isolation

Figure 2.3: Schematic of the circuit that adds a 2.5 V offset to the signal
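Once the shifted signal has been digitized by the Arduino, the offset can be removed again in software. The following lines are a minimal sketch of that conversion, not code from the thesis; the Arduino's 10-bit ADC maps 0 V to 5 V onto the integers 0 to 1023.

raw_counts = 512;                     % example reading from analogRead()
v_measured = raw_counts / 1023 * 5;   % back to volts, range 0 V to 5 V
v_signal   = v_measured - 2.5;        % remove the offset, range -2.5 V to +2.5 V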


    Chapter 3

    Pattern Recognition

To control a robot with four DOF, eight input patterns are necessary, because each DOF can rotate forward and backward. Two algorithms were used to distinguish different patterns from the three EMG signals: a neural network and an activity check.

    3.1 Pattern Recognition with a Neural Network

At first glance, an EMG signal looks like white noise. Except for differences in amplitude, nothing can be recognized in the time domain. In the frequency domain, however, a typical pattern can be observed. In figure 3.1 the left curve shows a typical EMG signal and the right curve shows a plot of its FFT. Because the FFT of the EMG signal has large fluctuations, it is filtered with a moving average filter over the last 20 samples. The resulting curve can be seen in figure 3.1 in red.

    Figure 3.1: left: EMG curve, right: FFT of the EMG.

From this filtered curve, samples are taken at the frequencies 20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 250, 300, 350, 400 and 450 Hz. More samples are chosen below 250 Hz because for higher frequencies the amplitude of the FFT approaches zero. The 16 samples of all three EMG signals are combined into a 48 x 1 input vector for the NN.
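A minimal Matlab sketch of this feature extraction is given below. The sampling rate, the window length and the variable names are assumptions for illustration; the actual extraction code of the thesis is not reproduced in this document.

fs  = 1000;                          % assumed sampling rate in Hz
emg = randn(1, fs);                  % placeholder for one window of EMG data
spec = abs(fft(emg));                % magnitude spectrum
spec = spec(1:floor(end/2));         % keep the positive frequencies only
spec_smooth = filter(ones(1,20)/20, 1, spec);          % moving average over the last 20 samples
freqs  = (0:length(spec_smooth)-1) * fs / length(emg); % frequency axis in Hz
f_pick = [20:20:220 250 300 350 400 450];              % the 16 sampling frequencies
feat   = zeros(16, 1);
for k = 1:16
    [~, idx] = min(abs(freqs - f_pick(k)));  % nearest FFT bin to the desired frequency
    feat(k)  = spec_smooth(idx);
end
% Stacking the 16 features of all three sensors gives the 48 x 1 NN input vector.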

The NN was designed similarly to [1], using backpropagation learning. It has 48 input nodes, one layer of 53 hidden nodes and 8 output nodes. The exact number of hidden nodes is not critical for the NN performance. Past experience has shown that using 10% more hidden nodes than input nodes works fine. Eight output nodes were used because eight patterns have to be recognized. To indicate pattern


n, node n is set to 1 while all the other nodes are set to 0. For a detailed description of the NN structure and the learning algorithm, see appendix A.
The Matlab code that implements the NN learning is shown in appendix B.1. It trains all the recorded patterns in a loop until the maximum of the sum squared error is smaller than 0.001 or a time mark is reached. A time mark of 4 minutes was chosen because the major changes in the NN structure happen in the first 2 minutes of learning. After 4 minutes almost no further changes occur. Figure 3.2 shows the process of a NN learning to differentiate eight patterns. The output (green) approaches the desired output (black) with every learning iteration. Pictures were taken after 10, 20, 30 and 150 iterations.

Figure 3.2: Green: NN output values after 10, 20, 30 and 150 learning iterations; Black: desired values

    3.2 Activity Check Pattern Recognition

A simpler way to recognize different EMG patterns is to check each muscle individually whether it is active or inactive. If the average EMG amplitude is bigger than a limit value, the observed muscle is active. In that case, the corresponding sensor variable is set to 1. In case of inactivity it is set to 0. Using n sensors, this algorithm can recognize 2^n patterns. Since the average amplitude is equal to the integral over the whole signal divided by the length of the signal, the integral can be compared to a limit value just as well. The integral is computed as a summation of all the absolute values of the signal, as in equation 3.1.

\mathrm{EMG}_i = \sum_{j=1}^{n} \left| \mathrm{value}_{ij} \right| \qquad (3.1)

To determine the limit value, a constant value of 0.3 V is integrated over the whole measurement time. Since the noise amplitude coming from an inactive muscle lies around 0.2 V and the EMG signal of an active muscle normally has an amplitude of 1.5 V, this provides a good limit value.
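For a single sensor, the check can be written in a few lines. The sketch below uses the window length of 200 samples from the Arduino code in appendix B.2; the signal itself and the variable names are placeholders.

samples = 200;
signal  = 0.2 * randn(1, samples);   % placeholder for one window of EMG data
EMG_i   = sum(abs(signal));          % integration as in equation 3.1
limit   = 0.3 * samples;             % the constant 0.3 V integrated over the window
active  = EMG_i > limit;             % 1 if the muscle counts as active, 0 otherwise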


    Chapter 4

    Sensor Placement

In the introduction, the placement of the sensors on the forearm is mentioned. A second sensor placement was tested as well, placing the sensors on the neck and the jaw of the subject. In the main test series, the sensors were placed on the forearm: the first one on the finger flexor muscle group, the second one on the finger extensor muscle group and the third one on the small thumb flexor (musculus flexor pollicis brevis). In the second test series, sensor one was placed on the left head turner muscle (musculus sternocleidomastoideus sinister), sensor two on the right head turner muscle (musculus sternocleidomastoideus dexter) and sensor three on the chewing muscle (musculus masseter). The locations of the sensors in the two different setups are shown in figure 4.1. The sources of the pictures are [2] and [3].

Figure 4.1: Top: Sensor placement on the forearm, Bottom: Sensor placement on the neck and jaw.


    Chapter 5

    Experiments

Two different pattern recognition algorithms and two different sensor placements were used in this thesis. The following experiments compare these four setups. The results can be seen in the next chapter.

    5.1 Simulation

Before measuring the success rate of these four setups, the recognition capabilities of the NN were simulated. The activity check algorithm cannot be simulated: it is specifically designed for the EMG signals and cannot be tested with randomly generated patterns. 10, 20, 30, up to 100 patterns were used to test the NN. Each pattern was generated as a 48 x 1 vector of random numbers between 0 and 1. This results in signals with an amplitude of 0.5 and a mean value of 0.5.
To create different samples of the same pattern, the original sample was overlaid with artificial measurement noise. Noise-to-signal ratios of 0.6, 1.0 and 1.4 were examined. The noise was generated as a vector of random values between plus and minus the noise amplitude. For each pattern the NN had to learn, there were three training samples and seven testing samples. A pattern is considered recognizable if at least 6 out of 7 testing samples were recognized correctly. The training samples were fed to the NN in a loop, as shown in figure 5.1. First the first sample of the first pattern is fed in, then the first sample of the second pattern and so on up to the first sample of the nth pattern. Then the same loop is repeated with the second set of training samples and then again with the third set.

    Figure 5.1: Schematic of the feeding loop for the NN
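A sketch of how such simulated samples could be generated is given below. The dimensions and the noise model follow the description above; the exact simulation code is not part of this document, and the variable names are illustrative.

num_patterns = 30;
noise_amp    = 0.5;                        % noise-to-signal ratio 1.0 (signal amplitude 0.5)
base  = rand(48, num_patterns);            % one random 48 x 1 base vector per pattern
train = zeros(48, num_patterns, 3);        % three training samples per pattern
test  = zeros(48, num_patterns, 7);        % seven testing samples per pattern
for s = 1:3
    train(:,:,s) = base + noise_amp * (2*rand(48, num_patterns) - 1);
end
for s = 1:7
    test(:,:,s)  = base + noise_amp * (2*rand(48, num_patterns) - 1);
end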


5.2 Measurements

Figure 5.3: Pattern recognition measurements from the forearm or the neck


    Chapter 6

    Results

6.1 Simulation

The NN can differentiate at most 50 patterns with the smallest noise-to-signal ratio. Increasing the measurement noise decreases the number of patterns the NN can learn to recognize as well as the chance of a sample being recognized correctly. If the NN is only trained with few patterns, the relative performance is high but the absolute performance is low. For every noise amplitude there is a maximal absolute performance, which stays constant over an interval of 20 to 40 patterns. Once it is reached, a further increase of the number of patterns does not change the absolute performance of the NN, yet the relative performance already decreases. If the number of patterns is increased beyond this interval, both the relative and the absolute performance decrease.
For a noise-to-signal ratio of 1.4, the NN can rarely recognize a pattern 6 or 7 times out of 7 tests. However, up to 50 test patterns, most patterns are recognized 3, 4 or 5 times, which indicates that the NN can still learn different patterns, but the fluctuations from one testing sample to the other are too big to allow a reliable classification. A better performance is expected if the noise-to-signal ratio of the testing samples were decreased to 1.0 or even 0.6. Figures 6.1 and 6.2 show the results of the simulation.


    Figure 6.1: Simulation results of absolute NN performance

Figure 6.2: Simulation results of relative NN performance


    6.2 Measurements

As the measurements show, the activity check algorithm performed best in recognizing eight muscle activity patterns. If this algorithm fails to recognize a pattern reliably, it is because the subject cannot control the muscle tension well enough while performing the movement. The more the subject practices controlling the muscle tension, the better these eight patterns can be distinguished. This could be observed during the measurements as well as during the experiments with the robot.
Against expectations, using one training sample for the NN worked better than using three. The explanation for this observation is that the fluctuations between two samples of a pattern can still have a similar order of magnitude as the differences between two patterns. Therefore it is better to use only one typical learning sample per pattern. The fluctuations will prevent some samples from being recognized correctly, but at least the learning samples are clearly distinguishable.
Whenever the NN could not recognize a pattern at all, it did not learn to do so during the learning session. If it could recognize a pattern only a few times, it managed to learn during the learning session, but the fluctuations of the samples of this pattern were bigger than the difference to another pattern. Therefore sometimes the one and sometimes the other pattern is recognized, resulting in a poor identification performance.


    Chapter 7

    The Robot

7.1 Characteristics

The following table shows the characteristics of the robot. Figure 7.2 contains a photograph of the robot with all four DOF sketched in. Sketches of the parts can be seen in appendix C.

    Figure 7.1: Table showing the characteristics of the robot


    Figure 7.2: Photograph of the Robot with the four DOF sketched in


    7.2 Controller

The control of the robot is feedforward only. The visual feedback of the user ensures a position control with zero steady-state error. The activity check algorithm is implemented on the Arduino microcontroller. The code for the Arduino is written in C and can be seen in appendix B.2.
The code uses the switch command to determine which pattern is recognized at the moment. To use this command, the EMG patterns have to be converted to integers. The three sensors produce values 0 or 1. These can be interpreted as binary numbers from 000 to 111, which are then converted to decimal numbers from 0 to 7; a sketch of this conversion is shown below the table. A servo action is assigned to each of these eight cases. The table in figure 7.3 shows the correlation between the user's action, the EMG pattern and the robot's action. The pattern where all muscles are inactive has to be assigned to not moving any servo, otherwise the robot could never stand still. This leaves only seven patterns to control eight servo movements. This problem is solved by designing the gripper as an open/close switch: the same signal opens the gripper if it is closed and closes the gripper if it is open. The remaining six patterns are used to control the other three DOF. Since they have to be controllable over their full range of motion, each of them needs one pattern to turn to the left and one to turn to the right.

Figure 7.3: Correlation between the user's action, the EMG pattern and the robot's action
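The conversion of the three activity bits into a case number can be sketched as follows. It is shown in Matlab only for illustration; the controller performs the same conversion in C on the Arduino, and the variable names are not taken from that code.

sensor_active = [1 0 1];          % activity bits of sensors 1 to 3
pattern_id = sensor_active(1)*4 + sensor_active(2)*2 + sensor_active(3);  % binary 101 -> 5
% pattern_id lies between 0 and 7 and selects the servo action in the switch statement.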


    7.3 Experiments with the Robot

The author was the main test subject; other subjects were merely used to analyze whether the robot works with other users as well. The main test used the sensor placement on the forearm. The subject had to use the robot to pick a teaspoon out of a cup and place it into another cup. The actions this task requires are listed below.

    Figure 7.4: Actions the Experiment includes

In figure 7.5, the control signal and the corresponding changes of the servo angles are shown.
Further experiments were conducted with different muscle groups of the same subject and with different subjects. The setup with the neck and jaw was tested, as well as two other setups using the chest (musculus pectoralis major) and the waist (musculus rectus abdominis) or the legs (musculus quadriceps femoris) and the calves (musculus gastrocnemius). In all these tests, including the ones with different subjects, the subject could make the robot move but was not capable of completing a task. From this it is concluded that all the tested muscles generate the same kind of signals. However, the amplitude of the signal differs between subjects and between muscles. To allow a precise control of the robot, the limit values for the activity check would have to be adapted to the specific setup. Also, the subject would have to train with each setup to improve coordination; otherwise the robot cannot be controlled.


    Figure 7.5: EMG signal and corresponding servo angles


    Chapter 8

Conclusions and Future Work

In the simulation of the NN pattern recognition capacities, the NN could recognize at most 50 input patterns. Depending on the noise amplitude, the maximum absolute performance lies between 30 and 70 tested patterns. Using too few patterns does not fully exploit the capacities of the NN, using too many confuses the NN and degrades its performance. The highest relative performance has been observed for 10 to 20 patterns. It decreases with a growing number of tested patterns.

The noise has a big influence on the NN performance. Increasing its amplitude decreases the number of patterns the NN can learn to recognize as well as the chance of a sample being recognized correctly. For a noise-to-signal ratio of 1.4, the NN could hardly recognize any pattern 6 times out of 7. However, it could recognize many patterns 3, 4 or 5 times. This indicates that the NN could still learn the patterns during the learning session but the noise was too big to allow a reliable classification during the tests. Future experiments could analyze whether decreasing the noise-to-signal ratio of the testing samples to 1.0 or even 0.6 would improve the NN performance.

The activity check could best recognize eight patterns that have unique combinations of tensed and relaxed muscles. A NN with one or three training samples was used as well to recognize these patterns. The training with one sample was more successful, because the fluctuations of the samples can still have the same order of magnitude as the differences between the patterns.

The attempt to measure as many muscle activity patterns as possible for a specific sensor placement was not successful. The NN could not distinguish most of the patterns because two or three were always too similar. To recognize more patterns, they would have to be made more distinct. Future tests could use more sensors to get individual signals from different muscles of a muscle group. Another approach could include more than one limit value for the activity check, to distinguish between different levels of activity.

While conducting the experiments it was observed that the placement of the electrodes has a huge influence on the quality of the measured signal. Placing the sensor two centimeters away from its intended position can lead to totally different signal amplitudes or even to a loss of the signal.


The robot could successfully be controlled using the activity check algorithm and the sensor placement on the forearm. Further experiments were conducted with different sensor placements and with different subjects. In all these tests the subject could make the robot move but never succeeded in controlling the robot well enough to perform a simple task. The limit values of the activity check would have to be adapted and the subject would have to practice with the specific setup.
It was observed that controlling the robot is a learning process. The longer the subject tried to perform the predefined task, the better the subject learned to generate the input signals to smoothly control the robot. This learning process improves the subject's coordination of the observed muscles as well as the control over the tension in each muscle while performing the movements. It is concluded that every subject and every sensor placement can be used to control the robot if the subject practices long enough to learn these abilities for the specific setup.


    Bibliography

    Background research

[1] Author not found: Chapter 3, Supervised Learning: Multilayer Networks I. Lecture notes,
http://www.cs.umbc.edu/~ypeng/F04NN/lecture-notes/NN-Ch3.ppt

    Figures

[2] Figure of the head muscles:
http://www.edoctoronline.com/medical-atlas.asp?c=4&id=21651&m=1&p=10&cid=1051&s=

[3] Figures of the hand muscles:
http://commons.wikimedia.org/wiki/File:Forearm_muscles_front_deep.png?uselang=de
http://commons.wikimedia.org/wiki/File:Forearm_muscles_back_deep.png?uselang=de


    Appendix A

Description of the Neural Network

A NN consists of many nodes and connections between these nodes. The nodes are organized in layers. A schematic of the NN used in this thesis can be seen in figure A.1. It is designed similarly to the NN described in [1].

    Figure A.1: Schematic of the neural network

The first layer is the input layer, followed by the hidden layers. The last layer is the output layer. Generally a NN can have many hidden layers, but this NN only has one. Every node of the NN has a connection to every node one layer before and one layer after itself. Every connection has a weight. The weights of the connections to the hidden layer are called hidden weights, hw, and the weights of the connections to the output nodes output weights, ow. The first index of a weight denotes the node in the target layer, the second one the node in the origin layer. The value of a node is a function of the weighted sum of all the values of the nodes one layer before the observed node. Each value is weighted with the weight of its connection to the examined node. The weighted sum of the input values is called h_sum because the hidden values, h_val, are a function of h_sum. Accordingly, the weighted sum of the hidden values is called o_sum and the output values, o_val, are a function of o_sum.


The input values are called i_val. Equation A.1 shows how the weighted sums are calculated.

h\_sum_i = \sum_{j=0}^{n} \left( hw_{ij} \, i\_val_j \right) \qquad o\_sum_i = \sum_{j=0}^{n} \left( ow_{ij} \, h\_val_j \right) \qquad (A.1)

    The sigmoid function is used to calculate the value of a node from its weighted sum.Equation A.2 shows the sigmoid function as a function of x.

    f(x) =1

    1 + ex(A.2)

The sigmoid function is shown in figure A.2. It has a slope of 0.25 at the point (0, 0.5), asymptotically approaches 0 as the argument goes to −∞, and approaches 1 as the argument goes to +∞. The sigmoid function is a common choice for a NN

    Figure A.2: Sigmoid function

node function. It constrains the values of the nodes to a finite interval. At the same time it is still sensitive to changes of the function argument around x = 0, allowing changes of the input values to introduce changes of the output values.
A NN creates an output vector for each input vector. The weights of the connections store the information how these two vectors correspond. A NN can learn to recognize patterns at its input and indicate them through its output. There are various algorithms for NN learning; in this thesis backpropagation learning was used. Backpropagation is a supervised learning algorithm. The system knows the desired output values and compares them to the actual output values. The sum of the squares of all these output errors is an indicator of how closely the NN output resembles the desired output. After every iteration, each weight of the NN is adapted proportionally to the partial derivative of the sum squared error with respect to the weight itself, see equation A.3. This teaches the NN to match the current input to the current desired output.

hw_{ij,\mathrm{new}} = hw_{ij} - \lambda \, \frac{\partial E}{\partial hw_{ij}} \qquad (A.3)
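As a worked step that the thesis does not spell out, applying the chain rule with the sigmoid derivative f'(x) = f(x)(1 - f(x)) to the sum squared error gives the analogous term for an output weight:

E = \sum_k \left(o\_val_{\mathrm{true},k} - o\_val_k\right)^2, \qquad \frac{\partial E}{\partial ow_{ij}} = -2\,\left(o\_val_{\mathrm{true},i} - o\_val_i\right)\, o\_val_i\left(1 - o\_val_i\right)\, h\_val_j

Inserting this gradient into the update rule reproduces the delta_ow term in the Matlab code of appendix B.1; the update of the hidden weights hw follows with one further application of the chain rule.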

The parameter λ is the learning step size. Experiments have shown that setting λ to 0.5 works fine with the used NN. If the learning step size is too big, the NN learning overshoots and can become unstable. Setting λ too small leads to a slow learning rate; of course, if λ = 0, the NN does not adapt at all. If a NN has to recognize several patterns, it is important to train all of them in a loop. If only one pattern at a time is trained, the NN forgets the ones it learned before. This happens because the NN adapts its weights after every learning iteration. All the weights are adapted to match the current input vector to the current desired output vector. This decreases the ability of the NN to match the previous input vector to


    its corresponding desired output. If this process is repeated too often, the NN canno longer match the previous patterns to their corresponding outputs at all.


    Appendix B

    Code

    B.1 Matlab Code for Neural Network

% NN_2_learning_EMG_signals

global num_patterns sensor_placement

name = ['EMG_recording_' sensor_placement '.mat'];
load(name, 'I_VAL');

m = 48;             % number of inputs
n = 48 + 5;         % number of hidden values
p = num_patterns;   % number of outputs
lambda = .5;        % size of the learning step

number_of_patterns = size(I_VAL, 2);
I_VAL_save = I_VAL;
% I_VAL = I_VAL(:, 1:number_of_patterns/3);
% % only train with one sample per pattern
% number_of_patterns = size(I_VAL, 2);
% I_VAL(:,1) = I_VAL_save(:, 1+number_of_patterns);

sum_sq_error = ones(number_of_patterns, 1);
% I_VAL((1:p),1) = [1 0 0 0 0 0 0 0]

% weights to hidden layer from input, n rows, m columns, start range -1..1
hw = 2*rand(n, m) - ones(n, m);
% weights to output from hidden layer, p rows, n columns, start range -1..1
ow = 2*rand(p, n) - ones(p, n);

tic
while max(sum_sq_error) > .001
    for j = 1:3
        for i = 1:number_of_patterns
            i_val      = I_VAL((p+1:end), i);
            o_val_true = I_VAL((1:p), i);

            h_sum = hw * i_val;             % weighted sum of all input values
            h_val = 1./(1 + exp(-h_sum));   % sigmoid function
            o_sum = ow * h_val;
            o_val = 1./(1 + exp(-o_sum));

            % modify weights ow
            derivative_o_val = o_val .* (1 - o_val);
            o_error  = o_val_true - o_val;  % p x 1 vector
            delta_ow = 2*lambda * (o_error .* derivative_o_val) * h_val';
            ow_new   = ow + delta_ow;

            % modify weights hw
            derivative_h_val = h_val .* (1 - h_val);
            delta_hw = 2*lambda * ((ow' * (o_error .* derivative_o_val)) .* ...
                derivative_h_val) * i_val';
            hw_new   = hw + delta_hw;

            sum_sq_error(i) = o_error' * o_error;
            ow = ow_new;
            hw = hw_new;

            figure(1)
            hold on
            plot(o_val, 'g')
            plot(o_val_true, 'k')
        end % for
    end % for

    max_sum_sq_error = max(sum_sq_error)   % no semicolon: print learning progress
    % pause(.5)
    if toc > 60*5
        break % break after the time mark
    end
    if max_sum_sq_error > .001
        clf(1, 'reset') % clear figure 1
    end
end % while

I_VAL = I_VAL_save;
name = ['EMG_recording_' sensor_placement '.mat'];
save(name, 'I_VAL', 'hw', 'ow')
% save EMG_learning_neck_NN.mat hw ow;

    B.2 Arduino Code for Activity Check

    // Activity_Check_controller

// Activity_Check_controller

#include <Servo.h>

Servo shoulder_rot;   // 90 middle
Servo shoulder_flex;  // 0 = vertical up
Servo ellbow;         // 0 = straight outwards
Servo grip;           // 180 = closed, 90 = open

int samples = 200;
float limit = round((.3) * samples);
float EMG[3];


    if (pos_flex>=90){break;} else{pos_flex=pos_flex+angle/3; break;}

    case 6:

    if (pos_flex


Appendix C

Sketches

    Figure C.2: Parts of the robot 2