ISSN: 2278-7798, International Journal of Science, Engineering and Technology Research (IJSETR), Volume 5, Issue 6, June 2016

All Rights Reserved © 2016 IJSETR

A New Step in Brain Computer Interaction towards Emotion Recognition and Prediction

Gayathri. P

IFET College of Engineering, India

Abstract - From the time they were invented, computers have seen enormous change and rapid development, but the ability to understand human users is still a complex area that needs to develop. For this to happen, the emotional state of the user has to be recognized. Hence, for classifying emotions, this work puts to use brain signals acquired through electroencephalography. The obtained signal is preprocessed before relevant features are extracted. The system is trained using a supervised classifier, the fuzzy classifier, and with the help of this training it classifies emotions along two well-known dimensions of emotion, valence and arousal. In addition to recognizing emotions, this work also predicts the direction the emotion may take in the future based on the current emotion. This is done by adding a new dimension called dominance. Emotion recognition, when paired with the prediction of future emotional states, will bring many novel benefits.

Keywords - Emotion, Electroencephalography, Valence, Arousal

I. INTRODUCTION

The word emotion means 'to stir up', and the term has been traced back to 1579. Emotion, by definition, is an intense affective state characterized by a sudden physical or mental disturbance which affects the ability to react to situations. It can also be defined as a complex process comprising numerous components such as feelings, thoughts, behavior and cognitive reactions.

It should be noted that emotion differs from two of its nearest neighbors, mood and affect. While emotions are very short lived, a mood tends to hang on for a longer time, and affect is the expression or experience of having an emotion.

Since then, there has been no shortage of research on the topic of emotion. It has found a firm place for itself in the field of engineering because of its scope in science beyond psychology. The need for emotion detection has grown with the increasing number of brain computer interface applications.

Emotions can be detected from eye gaze, text (handwriting), speech, facial expressions, gestures, physiological signals, or a combination of any of these. The ways in which emotions can be recognized are discussed in the related works section of this paper.

But before we get into all of those, let us have a brief introduction to the basic concepts and terms related to this work, namely the human brain, brain computer interaction and electroencephalography.

A. The Human Brain

As emotions are rightly called 'clues from the brain', it is necessary to take a brief look at the structure of the human brain and its functions. The brain controls the mental processes and physical actions of a human being and has four principal sections: the cerebrum, diencephalon, cerebellum and brain stem. The cerebrum is the largest and uppermost portion of the brain. It consists of four lobes, namely the frontal, parietal, occipital and temporal lobes. The brain is separated into right and left halves by the interhemispheric fissure.

It is said that the limbic system forms the neural basis of emotion. The limbic system is located just beneath the cerebrum, as shown in Fig. 1, and is responsible for our emotional lives. The amygdala, a primary structure of the limbic system, is known as the emotion center of the brain and is involved in evaluating the emotional valence of situations.

Fig. 1. The Limbic System

B. Brain Computer Interaction

Brain-Computer Interaction (BCI), which is actually a branch of Human-Computer Interaction (HCI), creates a direct communication channel between the brain and a computer. Its history began with Hans Berger's discovery of the electrical activity of the human brain.

There are several types of brain computer interfaces depending on the positioning of the sensors: invasive, partially invasive and non-invasive. Invasive devices are implanted directly into the brain, whereas partially invasive devices are implanted inside the skull but outside the brain. Non-invasive devices are placed outside the head and are considered to be the safest.

C. Electroencephalography

Electroencephalography (EEG) is the recording of the electrical activity of the brain. It measures voltage fluctuations arising within the neurons of the brain. Neurons, which form the building blocks of the brain, communicate with each other through electrical changes. These electrical changes are seen in the form of brain waves. Brain waves are measured in cycles per second (Hz).

The EEG signal can be classified into the following types based on frequency, as shown in Fig. 2. Delta waves (0.5 to 3 Hz) are the slowest and are generated in meditation and dreamless sleep. Theta waves (3 to 8 Hz) occur most often in sleep. Alpha waves (8 to 12 Hz) are present during quietly flowing thoughts. Beta waves (12 to 38 Hz) dominate our normal waking state of consciousness. Gamma waves (38 to 42 Hz) are the fastest brain waves and relate to the simultaneous processing of information from different brain areas.
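As a quick reference, the band boundaries listed above can be expressed as a small lookup table; this is only an illustrative sketch based on the ranges quoted in this section, with names chosen for the example:

# Illustrative sketch: map a frequency (Hz) to the EEG band names used in this paper.
EEG_BANDS = {            # (low_hz, high_hz), taken from the ranges quoted above
    "delta": (0.5, 3),
    "theta": (3, 8),
    "alpha": (8, 12),
    "beta":  (12, 38),
    "gamma": (38, 42),
}

def band_of(freq_hz):
    """Return the band a frequency falls into, or 'unclassified'."""
    for name, (lo, hi) in EEG_BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return "unclassified"

print(band_of(10.0))  # -> 'alpha'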

Fig.2. EEG waveforms

II. MODELS OF EMOTION

Emotions can be classified into primary and secondary. While the primary emotions are anger, disgust, fear, happiness, sadness and surprise, a combination of these forms the secondary emotions.

The classification of emotions has been researched from two perspectives. The first is that emotions are discrete and there is no continuity between them, while the second asserts that emotions can be characterized on a dimensional basis [10]. Based on these two approaches, several models of emotion have been proposed by various researchers. Some of them are given below.

A. Circumplex Model

This two dimensional model was proposed by James Russell. The two dimensions are valence and arousal. The valence parameter indicates whether the emotion is positive or negative, and the arousal parameter gives an indication of the intensity of the emotion experienced. This is the most widely used model.

B. PANA Model

The Positive Activation – Negative Activation (PANA) model introduces the idea that positive and negative emotions are two separate systems. The vertical axis is given by low to high positive affect and the horizontal axis by low to high negative affect.

C. Plutchik's Model

This is a three dimensional model offered by Robert Plutchik, which states that emotions are arranged in layers of concentric circles, where the inner circles represent basic emotions and the outer circles the more complex emotions.

D. PAD Model

Albert Mehrabian and James Russell together suggested the PAD model of emotion, where PAD stands for Pleasure, Arousal and Dominance. Pleasure determines whether the emotion is positive or negative, arousal determines whether the intensity of the emotion is high or low, and dominance determines whether the nature of the emotion is dominant or submissive.

E. Vector Model

This model is also two dimensional, but it is represented in a boomerang shape. The model is based on the assumption that an arousal dimension is always present and that only the valence dimension decides whether the emotion is positive or negative.

F. Lovheim Cube of Emotion

This model is different from the others because it relates directly to the combination of the neurotransmitters dopamine, noradrenaline and serotonin. It is a three dimensional model in which these neurotransmitters form the three axes and the emotions are placed at the corners of the cube formed by the axes.

III. RELATED WORKS

While this work uses EEG for the classification of emotions, many other techniques have previously been employed for the same purpose. A graphical comparison of their accuracies in recognizing emotions is given in Fig. 3, and they are discussed below.

A. Facial Expressions

Recognizing emotions from facial expressions is perhaps the easiest and earliest known technique. It has a long history, and the idea behind this approach is that emotions can be found from the spatial positioning of regions of the face such as the lips, eyelids, nostrils, cheeks and eyebrows. The problem, however, is that facial expressions are not always genuine, as people can act in certain situations.


B. Speech

Emotion can also be recognized from a person's speech, because the voice not only conveys a semantic message but also information about the emotional state of the speaker. The parameters of speech are the rate, range and average of the pitch, intensity, duration of speech and quality [6]. Other parameters include the contour, base of tone and spectral properties.

C. Audiovisual Signals

The majority of works in recognition of human emotions in the past few years focus on either speech or facial expression alone. However, some of the emotions are audio dominant, while the others are visually dominant. When one modality fails or is not good enough to determine a certain emotion, the other modality can help to improve the recognition performance. The integration of audio and visual data will convey more information about the emotional state. But the disadvantage is that the process becomes complex when dealing with two inputs[14].

D. Electrocardiogram (ECG)

The problem with all three of the above techniques is that they can easily be masked, as it is in the hands of humans to control them. But physiological signals are involuntary in nature and cannot be controlled; hence such signals are suitable for truly recognizing emotions. ECG records the electrical activity of the heart and is composed of three waves, namely the P wave, the QRS complex and the T wave. It can be associated with emotion on the basis of heart-related indexes such as heart rate, blood volume pulse, stroke volume, cardiac output, peripheral vascular resistance and myocardial contractility [1].

E. Electromyogram (EMG)

EMG measures the electrical activity of muscles, and facial EMG activity corresponds to negatively valenced emotions. The arousal parameter is given by another signal that is used alongside EMG: the Galvanic Skin Response (GSR), which is an indicator of skin conductance and increases linearly with arousal. The results of both signals are then combined to find the emotion based on the valence-arousal model [11].

Fig. 3. Comparison of various techniques used for emotion recognition.

IV. PROPOSED DESIGN

The proposed system aims not only to recognize and classify human emotions, but also to predict the dominant nature of the emotion so that it can be decided whether the emotion will be withdrawn or approached. The proposed system makes use of a three dimensional model which incorporates dominance values in addition to the existing valence and arousal levels.

The valence axis determines whether the emotion is positive or negative. It can be computed from the difference between the left and right hemispherical activities of the brain, since positive and negative emotions are controlled by the right and left hemispheres of the brain [4], [7]. Therefore, the difference between them is a good measure of the valence level, as given by (1).

Valence = (aF4/bF4) – (aF3/bF3) (1)

F4 is a location on the right side of the brain and F3 is a location on the left side (refer Fig. 5). F3 and F4 have been chosen from among the 16 electrode locations because they are in the prominent area of the forehead, where the EEG is best available. F stands for the frontal lobe of the brain; a and b stand for the alpha and beta waves at those locations.

The arousal axis determines whether the intensity of the emotion is low or high. It is calculated from the ratio of the beta and alpha waves in the EEG signal, since beta waves are associated with alertness and alpha waves with calmness [4], [7]. So the ratio between the alert and calm states of the brain gives an indication of the arousal level, as in (2).

Arousal = (bF3 + bF4) / (aF3 + aF4) (2)

Equation (2) adds up the beta levels of F3 and F4 and, similarly, the alpha levels of the same. It then divides the overall beta value by the overall alpha value to find the intensity level. From this, we can deduce the arousal level. Now, based on the valence and arousal values, we can find the emotion by using Fig. 4.
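A minimal sketch of equations (1) and (2), assuming the alpha and beta band powers at F3 and F4 have already been computed; the variable names and the quadrant threshold for Fig. 4 are illustrative assumptions, not values taken from the paper:

def valence_arousal(alpha_f3, beta_f3, alpha_f4, beta_f4):
    """Compute valence per (1) and arousal per (2) from band powers at F3/F4."""
    valence = (alpha_f4 / beta_f4) - (alpha_f3 / beta_f3)   # (1)
    arousal = (beta_f3 + beta_f4) / (alpha_f3 + alpha_f4)   # (2)
    return valence, arousal

def quadrant_emotion(valence, arousal, arousal_threshold=1.0):
    """Map valence/arousal to the four quadrants of Fig. 4 (threshold is an assumption)."""
    high = arousal >= arousal_threshold
    if valence >= 0:
        return "Happy" if high else "Calm"
    return "Fear" if high else "Sad"

v, a = valence_arousal(alpha_f3=4.2, beta_f3=3.1, alpha_f4=3.8, beta_f4=4.0)  # placeholder powers
print(v, a, quadrant_emotion(v, a))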

Fig.4. Valence Arousal Mapping

Having found out the emotion, we now want to know the nature of the emotion, whether it is dominant or submissive. If the emotion is dominant, it will be approached in future; if it is submissive, the emotion will be withdrawn. For example, if a person is sad and the emotion is dominant, then he or she may be about to cry in the time that immediately follows. On the contrary, if a person is sad and the emotion is submissive in nature, the person is expected to return to normal subsequently.

In human terms, the dominance parameter describes feelings of control over situations versus feelings of being controlled by external circumstances. It is not possible, in principle, to derive such a subjective component directly from the signal. Hence, the weight of the emotion is calculated and used as a measure of the dominance parameter.

The weight of the emotion is measured with the help of the activation and saturation thresholds. The activation threshold is the minimum value required to activate an emotion, and the saturation threshold is the maximum value that an emotion can possess. These are global constants and are valid for every emotion category [2], [3].

w = 1 - (d - Δ) / (Ф - Δ) (3)

The weight of the emotion is calculated by (3), where d is the current value of the emotion, Ф is the activation threshold and Δ is the saturation threshold. If the weight w is greater than 0.5, the emotion will be approached; if it is less than 0.5, the emotion will be withdrawn. Thus the weight of the emotion is a good candidate for predicting its dominance, and in this way we can predict the direction of emotions in the future. The block diagram of the proposed system is given in Fig. 5.
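A short sketch of equation (3) and the resulting approach/withdraw decision, under the assumption that d is the current value of the emotion lying between the activation threshold Ф and the saturation threshold Δ; the numeric values are placeholders:

def emotion_weight(d, activation, saturation):
    """Weight per (3): w = 1 - (d - saturation) / (activation - saturation).

    Assumes d is the current value of the emotion, activation (Ф) the minimum
    value needed to activate it and saturation (Δ) its maximum value.
    """
    return 1 - (d - saturation) / (activation - saturation)

def predict_direction(weight):
    """Per the paper: w > 0.5 -> dominant (approached), otherwise submissive (withdrawn)."""
    return "approached (dominant)" if weight > 0.5 else "withdrawn (submissive)"

w = emotion_weight(d=0.8, activation=0.2, saturation=1.0)  # placeholder values
print(round(w, 2), predict_direction(w))                    # -> 0.75 approached (dominant)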

Fig.5. Block diagram

The work starts with measuring the EEG signal and transmitting it to the appropriate software. The signal is then converted from the time domain to the frequency domain for computational purposes by means of the FFT. The transformed signal is then partitioned using a fitting window of suitable length, and the EEG is normalized using an adaptive filter.

Next comes the feature extraction step, where only relevant features are extracted using a bandpass filter. Then, for feature selection, principal component analysis (PCA) is used to reduce dimensionality. The final step is training the system on the input data using a classifier. The system is thus trained and is ready to classify any unlabeled emotion.

V. IMPLEMENTATION DETAILS

The proposed work can be broken down into the following units: acquiring the EEG signal from the user, processing the obtained signal for better accuracy, extracting the needed features from the signal, and finally training the system using a fuzzy logic methodology for classifying the emotions.

A. Signal Acquisition

There are several ways to capture the signals of the brain other than EEG: magnetoencephalography (MEG), magnetic resonance imaging (MRI), near infrared spectrum imaging (NIRS) and functional magnetic resonance imaging (fMRI).

EEG, however, has several advantages over the others: the hardware is less costly, it is portable, and it does not expose subjects to magnetic fields.

The EEG signal, which forms the basic input of this work, can be acquired using the Neurosky MindWave Mobile headset, which has an EEG channel and a reference channel. This EEG biosensor records five wave bands, namely delta, theta, alpha, beta and gamma.

Neurosky Mindwave Mobile Headset

The MindWave Mobile headset safely measures and outputs the EEG power spectrum. The device consists of a headset, an ear-clip and a sensor arm. The headset's reference and ground electrodes are on the ear clip, and the EEG electrode is on the sensor arm, resting on the forehead above the eye (FP1 position).

NeuroSky, founded in 2004 in California, is a Silicon Valley-based company that manufactures Brain-Computer Interface products for various consumer applications by adapting EEG technology to the consumer market.

The success of this headset comes from its use of inexpensive dry sensors, whereas other EEG headsets such as the Emotiv Epoc use wet sensors that require the application of a conductive gel between the sensors and the head. It is also accurate and portable, and the EEG biosensor collects electrical signals, not actual thoughts, to translate brain activity into action.

It digitizes and amplifies raw analog brain signals to deliver concise inputs to educational and research applications. The primary features of the Neurosky headset are automatic wireless pairing, Bluetooth version 2.1 Class 2 with a 10 metre range, a static headset ID for pairing, extremely low-level signal detection, and a 512 Hz sampling rate with a frequency range of 3-100 Hz.


The ThinkGear connector is an interface program through which the captured brainwaves are transferred to the system for further processing. No configuration or separate installation is required for the ThinkGear connector; it simply pairs the headset with the computer.

The recorded brainwaves are transmitted over Bluetooth to the MindWave Mobile software. From there the signal is exported to the Matlab software, whose connection can be established by setting up a path.

Fig.6 NeuroSky Mindwave Mobile Headset

B. Preprocessing

The tasks that come under preprocessing are signal transformation, adaptive filtering and temporal windowing. The signal, which is in the time domain, is converted to the frequency domain using the FFT (Fast Fourier Transform).
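A brief illustration of the time-to-frequency conversion step, using NumPy's FFT on a placeholder signal sampled at the headset's 512 Hz rate; the signal itself is synthetic:

import numpy as np

fs = 512                                    # sampling rate (Hz), as quoted for the headset
t = np.arange(fs * 2) / fs                  # two seconds of samples
eeg = np.sin(2 * np.pi * 10 * t)            # placeholder 10 Hz (alpha-range) component

spectrum = np.fft.rfft(eeg)                 # FFT of the real-valued signal
freqs = np.fft.rfftfreq(len(eeg), d=1/fs)   # frequency axis in Hz
peak = freqs[np.argmax(np.abs(spectrum))]   # dominant frequency, ~10 Hz here
print(peak)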

The signal that is obtained is continuous, so a fitting window is used to partition it. A Gaussian function is used for the fitting window; it has the form shown in (4).

f(x) = a exp(-(x - b)² / (2c²)) (4)

where a = 1/(σ√(2π)) is the height of the curve's peak, b = μ is the position of the center of the peak, and c = σ controls the width of the bell.
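A brief NumPy sketch of the Gaussian fitting window in (4); the window length and σ below are illustrative choices, not values given in the paper:

import numpy as np

def gaussian_window(length, sigma):
    """Gaussian window per (4), with b at the centre of the window and c = sigma.

    The peak height a = 1/(sigma*sqrt(2*pi)) follows the normalisation quoted above.
    """
    x = np.arange(length)
    b = (length - 1) / 2.0                      # position of the centre of the peak
    a = 1.0 / (sigma * np.sqrt(2 * np.pi))      # height of the curve's peak
    return a * np.exp(-((x - b) ** 2) / (2 * sigma ** 2))

win = gaussian_window(length=256, sigma=32.0)   # e.g. a 256-sample segment (assumed length)
segment = win * np.random.randn(256)            # apply the window to one EEG segment (dummy data)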

Normalization also has to be performed on the signal in order to remove external interference such as heartbeat, ocular and muscular artifacts.

An adaptive filter is used for normalization because the signal and noise characteristics are often non-stationary and vary with time. An adaptive filter has an adaptation algorithm that varies the filter coefficients accordingly, as in (7). Of the various adaptation algorithms available, the Least Mean Square (LMS) algorithm is used here.

It consists of two basic processes. One is the filtering process, which calculates the output of the filter and the estimation error by comparing the output with the desired signal. The other is the adaptation process, which adjusts the tap weights based on the estimation error. There are many configurations for using the adaptive filter, such as interference cancellation, system identification and inverse modeling, of which the first is shown in Fig. 7 [12].

Fig.7 Adaptive filter configuration

Filter output : y(n) = w^T(n) r(n) (5)

Estimation error : e(n) = x(n) - y(n) (6)

Coefficient update : w(n+1) = w(n) + μ e(n) r(n) (7)

where y(n) is the filter output (the estimate of the noise), w(n) is the filter coefficient vector, r(n) is the reference signal, x(n) is the acquired EEG signal and μ is the step-size parameter. Therefore, from (6), e(n) is the clean EEG signal that is wanted.
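A compact sketch of the LMS interference-cancellation loop in (5)-(7); the filter order, step size and signals are illustrative placeholders, not values given in the paper:

import numpy as np

def lms_cancel(x, r, order=8, mu=0.01):
    """LMS interference cancellation per (5)-(7).

    x: acquired EEG (desired signal plus interference); r: reference signal
    correlated with the interference. Returns e, the cleaned EEG estimate.
    """
    n_samples = len(x)
    w = np.zeros(order)                     # filter coefficient vector w(n)
    e = np.zeros(n_samples)
    for n in range(order, n_samples):
        r_vec = r[n - order:n][::-1]        # most recent 'order' reference samples
        y = np.dot(w, r_vec)                # (5) filter output, estimate of the noise
        e[n] = x[n] - y                     # (6) estimation error = cleaned EEG sample
        w = w + mu * e[n] * r_vec           # (7) coefficient update
    return e

# Dummy usage: a sinusoidal "EEG" corrupted by noise that is also seen in the reference.
t = np.arange(2000) / 512.0                 # 512 Hz sampling rate as quoted for the headset
noise = 0.5 * np.random.randn(2000)
x = np.sin(2 * np.pi * 10 * t) + noise      # acquired signal
clean = lms_cancel(x, r=noise)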

C. Feature Extraction

Not all of the information carried by the EEG signal is required for recognizing emotions. Hence only the needed features, namely the alpha and beta bands of the EEG signal, must be extracted. For this a bandpass filter is used. The frequency range of the alpha band is 8 to 12 Hz and that of the beta band is 12 to 38 Hz.

The filter specifications for the alpha band are:
Edge of the first stop band = 5 Hz
Starting edge of the pass band = 8 Hz
Closing edge of the pass band = 12 Hz
Edge of the second stop band = 15 Hz

The filter specifications for the beta band are:
Edge of the first stop band = 10 Hz
Starting edge of the pass band = 12 Hz
Closing edge of the pass band = 38 Hz
Edge of the second stop band = 40 Hz
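A sketch of how these two bands might be isolated with SciPy; the Butterworth design and filter order are assumptions made for the example, since the paper only gives the band edges and Fig. 8:

import numpy as np
from scipy import signal

fs = 512.0  # sampling rate of the headset (Hz)

def design_bandpass(pass_lo, pass_hi, order=4):
    """Butterworth bandpass between the given pass-band edges (an assumed design,
    not the exact filter specified in the paper)."""
    return signal.butter(order, [pass_lo, pass_hi], btype="bandpass", fs=fs, output="sos")

alpha_sos = design_bandpass(8.0, 12.0)    # alpha pass-band edges from the specification above
beta_sos = design_bandpass(12.0, 38.0)    # beta pass-band edges from the specification above

eeg = np.random.randn(5 * int(fs))        # placeholder raw EEG, 5 seconds
alpha_wave = signal.sosfiltfilt(alpha_sos, eeg)
beta_wave = signal.sosfiltfilt(beta_sos, eeg)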

The specifications of the bandpass filter are shown in Fig. 8. But not all of the alpha and beta band data can be used, so only a few of the extracted features are selected. This is done using principal component analysis (PCA). PCA is a simple method of extracting prominent information from large datasets. It has four steps:

A. Subtract the mean

B. Calculate the covariance matrix

C. Calculate the eigenvectors and eigenvalues

D. Reduce dimensionality and form feature vector

Form a matrix by filling in all the obtained alpha and beta values separately. Find the average of each column; this is the mean alpha and mean beta. Now subtract this from the original matrix. Subtracting the mean makes the variance and covariance calculations easier by simplifying their equations. Next, find the covariance matrix and its corresponding eigenvalues and eigenvectors. The eigenvector with the highest eigenvalue is the principal component of the data set.
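A minimal NumPy sketch of the four PCA steps above, applied to a hypothetical feature matrix whose columns hold the extracted alpha and beta values; the data and the number of retained components are placeholders:

import numpy as np

def pca_reduce(features, n_components=1):
    """PCA as described above: mean-subtract, covariance, eigendecomposition, project."""
    centred = features - features.mean(axis=0)          # A. subtract the column means
    cov = np.cov(centred, rowvar=False)                  # B. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)               # C. eigenvalues and eigenvectors
    order = np.argsort(eigvals)[::-1]                     # sort by decreasing eigenvalue
    components = eigvecs[:, order[:n_components]]         # D. keep the top components
    return centred @ components                           # reduced feature vectors

features = np.random.rand(100, 4)    # placeholder: rows = windows, columns = alpha/beta features
reduced = pca_reduce(features, n_components=2)
print(reduced.shape)                 # (100, 2)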

Fig.8. Bandpass filter specifications

D. System Learning and Classification

The system first has to be trained with labeled data so that it can classify unlabeled data. The trained data is used as the basis for classification. There are several learning styles that an algorithm can have: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning. The classifier used here is a fuzzy classifier.

Fuzzy classifiers are an application of fuzzy logic theory. Fuzzy Logic is a multi-valued logic that allows intermediate values to be defined between conventional evaluations like true/false, yes/no.

Expert knowledge is used for defining fuzzy rules, and it is expressed in a very natural way using linguistic variables, as shown in (8) and (9). For example, consider the two variables Valence and Arousal. The expert knowledge for these can be formulated as rules, and such rules can be combined in a table called a rule base, like Table I for emotion recognition and Table II for emotion prediction. Fuzzy rules for the emotions Happy and Excited are defined below.

IF Valence Positive AND Arousal High THEN Emotion = Happy (8)

Table I. Rule Base for Emotion Recognition

VALENCE    AROUSAL    EMOTION
Positive   Low        Calm
Positive   High       Happy
Negative   Low        Sad
Negative   High       Fear

IF Emotion Happy AND Weight Dominant THEN Prediction = Excited (9)

Table II. Rule Base for Emotion Prediction

EMOTION    WEIGHT       PREDICTION
Calm       Submissive   Calm
Calm       Dominant     Contented
Happy      Submissive   Happy
Happy      Dominant     Excited
Sad        Submissive   Sad
Sad        Dominant     Depressed
Fear       Submissive   Fear
Fear       Dominant     Terrified

Based on the training given to the system, as in Fig. 9, it will classify any unlabeled emotion presented to it and predict the dominance as explained. A sketch of such a rule base is given below.
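The two rule bases can be sketched as simple lookup tables. This crisp sketch only mirrors Tables I and II; it omits the membership functions of the actual fuzzy logic toolbox implementation, and the arousal threshold is an assumption:

# Crisp sketch of Table I (recognition) and Table II (prediction).
RECOGNITION_RULES = {
    ("Positive", "Low"): "Calm",
    ("Positive", "High"): "Happy",
    ("Negative", "Low"): "Sad",
    ("Negative", "High"): "Fear",
}

PREDICTION_RULES = {
    ("Calm", "Submissive"): "Calm",      ("Calm", "Dominant"): "Contented",
    ("Happy", "Submissive"): "Happy",    ("Happy", "Dominant"): "Excited",
    ("Sad", "Submissive"): "Sad",        ("Sad", "Dominant"): "Depressed",
    ("Fear", "Submissive"): "Fear",      ("Fear", "Dominant"): "Terrified",
}

def recognize(valence, arousal, arousal_threshold=1.0):
    """Label valence/arousal and look the emotion up in Table I (threshold is an assumption)."""
    v_label = "Positive" if valence >= 0 else "Negative"
    a_label = "High" if arousal >= arousal_threshold else "Low"
    return RECOGNITION_RULES[(v_label, a_label)]

def predict(emotion, weight):
    """Apply Table II: weight > 0.5 means the emotion is dominant, per equation (3)."""
    nature = "Dominant" if weight > 0.5 else "Submissive"
    return PREDICTION_RULES[(emotion, nature)]

emotion = recognize(valence=0.3, arousal=1.4)   # -> 'Happy'
print(emotion, predict(emotion, weight=0.7))    # -> Happy Excited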

Fig.9. Training the system

VI. RESULTS AND DISCUSSION

A. Experimental Setup

The experiment was performed on 30 participants in various environments, as the materials needed for carrying out the experiment had no restrictions and were portable. Initially, the participants were given instructions regarding the use of the headset and a short briefing on the objectives of the project.

In some cases, an external stimulus in the form of a video was used to induce emotions. Otherwise, no stimulus was used and only the real emotions of the participants were captured.


The MindWave Mobile headset was positioned and the headset status was checked. Once connected, the EEG signal of the participant was recorded for a duration of ten minutes.

From the recorded signals, the present emotion of the participant and his/her future emotional states were determined. This was cross-checked against the self-assessment report collected from each participant prior to the start of the experiment. The report contained the current emotional state the user was experiencing and the next emotion he/she might possibly move into.

Efforts were taken to ensure that all the cases of the proposed system were tested and accurate results were obtained.

B. Illustrations

This section shows snapshots that were taken during the implementation of the proposed design.

The first snapshot shows the Neurosky MindWave Mobile headset being added to the system by means of Bluetooth technology. If the system's internal Bluetooth is not available, it is advised to use an external Bluetooth device such as a dongle.

Once the headset has been connected to the system and paired with the MindWave Mobile software at the backend, it is positioned properly on the head so that the green symbol is enabled, as seen above.

The figure within the screenshot shows the real-time EEG that has been recorded from the participant by the Neurosky MindWave Mobile headset and imported into the Matlab workspace.

The next figure shows the alpha waves of the participant's recorded EEG signal. These are obtained by filtering the alpha waves alone from the entire EEG using a bandpass filter of the appropriate frequency range.

These are followed by the beta waves of the corresponding EEG recorded from the participant. Just as the alpha waves were extracted before, another filter with the range of the beta waves has been used to obtain this figure.


This screenshot shows the two-dimensional plot of the valence and arousal values calculated from the EEG signal. There are four quadrants, namely fear, happy, sad and calm, based on the values of valence and arousal. When the point 'x' falls in any of these quadrants, that quadrant is the current emotion of the user whose EEG has been recorded. This is displayed in a separate message box, as seen above.

The next screenshot shows the emotion of the user through the membership functions that have been defined using the fuzzy logic toolbox.

Another screenshot shows a message box displaying the existing emotion of the user, the nature of the emotion (whether it is dominant or not) and, based on these two, the emotion that the user may experience in future.

This is followed by the surface view of the membership functions that have been defined for predicting the emotions.

The graph above shows the various emotions of 10 participants, A, B, C, D, E, F, G, H, I and J, on hearing the news that India won the cricket match against Australia.

This graph shows the new quadrants of emotion that the users can experience based on their previous emotion. Five of the ten participants, namely B, F, G, H and J, can shift to the new set of emotions. This is because the weights of their emotions, namely calmness, happiness and fear, are high; hence those emotions are dominant. The others tend to remain in the same emotion, as the weight is lower, and those emotions are hence termed submissive.

VII. CONCLUSION

Recognizing emotions was not required previously, when computers were used rarely and only for computational purposes. But in today's scenario, most of our time is spent with computers or associated devices. Such being the case, imparting knowledge of the emotional state of the user to the system can help the user interact in a better way, as the device he or she is using will respond according to the situation.

Sophisticated use of systems is not the only application of this work. It can also be used in the clinical field for assessing the emotional state of patients who are unable to convey it. It can also find a place in intelligent tutoring systems and related educational domains.

Extending this to the prediction of future emotional states will improve accuracy and widen the range of applications. Thus emotion recognition, when coupled with prediction, will take the sphere of affective computing to its next level of advancement.

A. Summary of Findings

A total of 50 trials were conducted, of which 20 were used for recognizing four different cases of emotion and 30 were used for prediction of future emotional states. Some trials did not yield the expected output. The accuracy for recognizing emotions alone is estimated to be 92.64%, and the accuracy for emotion prediction is determined to be 87.50%. Combining these results, the accuracy of the overall system is calculated to be 90%.
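As a plausibility check on the combined figure (assuming, since it is not stated explicitly, that the overall accuracy is a trial-weighted average of the two reported accuracies):

# Trial-weighted combination of the two reported accuracies (assumed averaging scheme).
rec_trials, rec_acc = 20, 92.64      # recognition trials and accuracy (%)
pred_trials, pred_acc = 30, 87.50    # prediction trials and accuracy (%)
overall = (rec_trials * rec_acc + pred_trials * pred_acc) / (rec_trials + pred_trials)
print(round(overall, 2))             # 89.56, consistent with the reported figure of ~90%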

B. Future Scope

The project has concentrated on finding emotions and predicting their dominance on a small scale, considering only a few emotions. Further extensions of the project may work with a larger variety of emotions, including very intricate ones.

This work, though an enhancement of the existing two-dimensional model, is still at a preliminary stage. A trained data set obtained from a person, coupled with the model discussed here, would help in accurately predicting the emotional patterns of the person under test over a longer period of time.

In addition to recognizing emotions, this work may also form part of various applications in the fields of forensics, medicine and education.

ACKNOWLEDGEMENT

I would like to express my sincere thanks to my professor, Dr. Matilda S, for her valuable guidance and support for the successful completion of this work.

REFERENCES

[1] F. Agrafioti, D. Hatzinakos, and A. K. Anderson, "ECG pattern analysis for emotion detection," IEEE Transactions on Affective Computing, pp. 102-115, 2012.

[2] C. Becker-Asano and I. Wachsmuth, "Affective computing with primary and secondary emotions in a virtual human," Autonomous Agents and Multi-Agent Systems, pp. 32-49, 2010.

[3] C. Becker, S. Kopp, and I. Wachsmuth, "Simulating the emotion dynamics of a multimodal conversational agent," in Affective Dialogue Systems, Springer Berlin Heidelberg, pp. 154-165, 2004.

[4] D. O. Bos, "EEG-based Emotion Recognition: The Influence of Visual and Auditory Stimuli," 2014.

[5] J. Broekens, "In defense of dominance: PAD usage in computational representations of affect," International Journal of Synthetic Emotions (IJSE), pp. 33-42, 2012.

[6] R. Cowie et al., "Emotion recognition in human-computer interaction," IEEE Signal Processing Magazine, pp. 32-80, 2001.

[7] J. Eaton, W. Jin, and E. Miranda, "The Space Between Us: A Live Performance with Musical Score Generated via Affective Correlates Measured in EEG of One Performer and an Audience Member," NIME'14 International Conference on New Interfaces for Musical Expression, 2014.

[8] J. A. Russell and A. Mehrabian, "Evidence for a Three-Factor Theory of Emotions," Journal of Research in Personality, Vol. 11, pp. 273-294, 1977.

[9] J. R. J. Fontaine, K. R. Scherer, E. B. Roesch, and P. C. Ellsworth, "The World of Emotions Is Not Two-Dimensional," Association for Psychological Science, Vol. 18, 2007.

[10] M. Lewis and J. M. Haviland-Jones, "Handbook of Emotions," Guilford Publications, 2000.

[11] A. Nakasone, H. Prendinger, and M. Ishizuka, "Emotion recognition from electromyography and skin conductance," Proc. of the 5th International Workshop on Biosignal Interpretation, 2005.

[12] F. Bouchereau Olguín and S. Martínez, "Adaptive notch filter for EEG signals based on the LMS algorithm with variable step-size parameter," Proceedings of the 39th International Conference on Information Sciences and Systems, 2005.

[13] M. Teplan, "Fundamentals of EEG measurement," Measurement Science Review, pp. 1-11, 2002.

[14] Y. Wang and L. Guan, "Recognizing human emotional state from audiovisual signals," IEEE Transactions on Multimedia, pp. 936-946, 2008.