
Brief Lecture on Sentiment Analysis

Case Study - How Twitter Users Feel About Nigeria's 2015 Presidential Candidates

Deolu Adeleye

Welcome to Sentiment Analysis!

One of the most fascinating and attractive reasons for delving into the field of artificial intelligence and machine learning is this: it is quite VAST! There is really no sector of life where practical applications cannot be studied and applied, as evidenced in our day-to-day lives today: in medicine, music, engineering. . . the list is endless!

Today, we look at applying some of these techniques in the area of society and politics.

“Out of the abundance of the heart, the mouth speaks. . . ”

It's election season in Nigeria, and as usual, there are a lot of polls and predictions of who will win, where they'll shine, and so forth.

However, is it possible to go a step further? Could we, rather than just trying to figure out who will win, perhaps also figure out how people actually FEEL about candidates? Based on what people are saying, can we point to what precise emotion each evokes from their followers?

Short answer: YES! And this case study aims to demonstrate this. . .

How it works

There are MANY (more complicated) algorithms available for estimating the sentiment of a particular text or conversation.

However, for the purpose of this brief lecture, and simply as a proof of concept, I implemented the simple naive Bayes classifier

• for polarity: it is trained on the combination of Janyce Wiebe's subjectivity lexicon and Bing Liu's subjectivity lexicon, and will polarize words as being 'negative' or 'positive'

• for emotions: it is trained on Carlo Strapparava and Alessandro Valitutti's emotions lexicon, and will classify words as falling into one of the following categories: 'anger', 'disgust', 'joy', 'surprise', 'fear' and 'sadness'

Based on the mixture of positive and negative words, each tweet will be given a value within the range of -5 (very negative) to +5 (very positive).

Some quick examples:

1. the words ‘terrific’ and ‘warmhearted’

• will be polarized as 'positive' by our classifier
• will be classified under 'surprise' and 'joy' respectively


2. the words ‘obscene’ and ‘weeping’

• will be polarized as 'negative'
• will be classified under 'disgust' and 'sadness' respectively.

Let's look at a more robust example. Consider the sentence below:

“Oh, you only eat ‘fresh’ donuts. . .my! How healthy your diet is!”

Of course, anyone reading the above will obviously note the tinge of sarcasm, but the algorithm will pick out the words 'fresh', 'healthy' and 'diet' - all positive words - and rate this sentence as being quite positive.

It should be clear by now that this isn't a perfect classifier, hence it being aptly named a NAIVE algorithm.

Well, despite its simplicity (or naivete, if you will), we are still able to obtain quite interesting and even useful results when it is applied. One likely reason is that sarcasm such as the above accounts for a very small percentage of all conversation, both in real life and on social media.
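To make the word-counting idea concrete, here is a minimal sketch of a purely lexical scorer (the word lists below are made-up stand-ins, not the actual lexicons used later), applied to the donut sentence above:

# toy positive/negative word lists (stand-ins for the real lexicons)
toy.pos <- c("fresh", "healthy", "diet", "terrific", "warmhearted")
toy.neg <- c("obscene", "weeping")

# split a sentence into lowercase words and count lexicon matches
toy.score <- function(sentence){
  words <- unlist(strsplit(tolower(sentence), "[^a-z]+"))
  sum(words %in% toy.pos) - sum(words %in% toy.neg)
}

toy.score("Oh, you only eat 'fresh' donuts... my! How healthy your diet is!")
# gives 3: the sarcasm is invisible to a purely lexical score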

The Presidential Candidates

In alphabetical order:

1. Adebayo, Ayeni (Male) - African Peoples Alliance (APA)
2. Ahmad, Mani (Male) - African Democratic Congress (ADC)
3. Anifowose-Kelani, Tunde (Male) - Action Alliance (AA)
4. Buhari, Muhammadu (Male) - All Progressives Congress (APC)
5. Chinedu, Allagoa (Male) - Peoples Party of Nigeria (PPN)
6. Eke, Sam (Male) - Citizens Popular Party (CPP)
7. Galadima, Ganiyu (Male) - Allied Congress Party of Nigeria (ACPN)
8. Jonathan, Goodluck (Male) - Peoples Democratic Party (PDP)
9. Okorie, Chekwas (Male) - United Progressive Party (UPP)
10. Okoye, Godson (Male) - United Democratic Party (UDP)
11. Onovo, Martin (Male) - National Conscience Party (NCP)
12. Owuru, Ambrose (Male) - Hope Democratic Party (HDP)
13. Salau, Rafiu (Male) - Alliance For Democracy (AD)
14. Sonaiya, Oluremi (Female) - KOWA Party (KP)

Installing Necessary Packages

We’ll be using R and RStudio (which can be downloaded from here and here respectively).

Once those two are downloaded and installed, we need to download the following packages to achieve our analysis:

• "dplyr"
• "plyr"
• "ggplot2"
• "devtools"
• "NLP"
• "tm"
• "SnowballC"
• "RWeka"


• "stringr"
• "twitteR"
• "Rstem"
• "RColorBrewer"
• "sentiment"

Let’s get them.

#check if necessary packages are installed.
#If they are, load them; else, install and load them.
reqd_pkgs <- c("dplyr", "plyr", "ggplot2", "devtools",
               "NLP", "tm", "SnowballC", "RWeka", "Rstem",
               "RColorBrewer", "stringr")

for (i in 1:length(reqd_pkgs)){
  if (! (reqd_pkgs[i] %in% rownames(installed.packages()))){
    message(paste("Package '", reqd_pkgs[i], "' not installed. Installing...", sep=""))
    install.packages(reqd_pkgs[i])
    library(reqd_pkgs[i], character.only=T)
  } else library(reqd_pkgs[i], character.only=T)
}

twitteR will be downloaded from creator Jeff Gentry's GitHub repo, instead of CRAN (the author's repo is usually more up to date).

devtools::install_github("geoffjentry/twitteR")
#load once downloaded
library(twitteR)

(If that doesn’t work for you, you can also run

install.packages("twitteR")

from good ol' CRAN.)

Next, we install the sentiment package. It is archived on CRAN, so we’ll have to download the tar.gz file and install:

#download 'sentiment' package from CRAN archive
download.file(url="http://cran.r-project.org/src/contrib/Archive/sentiment/sentiment_0.2.tar.gz",
              destfile="sentiment_0.2.tar.gz")
install.packages("sentiment_0.2.tar.gz", repos = NULL, type = "source")
#load the library
library(sentiment)

Connecting to Twitter

To retrieve tweets, you'll have to set up a Twitter API app.

The first step is to create a Twitter application for yourself. Go to https://twitter.com/apps/new and set up an account / log in. After filling in the basic info, go to the "Settings" tab and select "Read, Write and Access direct messages". Make sure to click on the save button after doing this. In the "Details" tab, take note of the following:

• your consumer key


• your consumer secret
• your access token
• your access secret

Once these four are retrieved, simply insert them into the setup_twitter_oauth function in the format

setup_twitter_oauth("API key", "API secret", "Access token", "Access secret")

Here's ours with the corresponding values inserted:

#authenticate
setup_twitter_oauth(our_key,
                    our_secret,
                    our_token,
                    our_access_secret)

You only need to authenticate once per R session.
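If you'd rather not hard-code the four values in your script, one option (a sketch, assuming you've stored them in environment variables with these hypothetical names, e.g. in your .Renviron) is to read them with Sys.getenv():

#read credentials from environment variables (variable names here are just examples)
our_key           <- Sys.getenv("TWITTER_API_KEY")
our_secret        <- Sys.getenv("TWITTER_API_SECRET")
our_token         <- Sys.getenv("TWITTER_ACCESS_TOKEN")
our_access_secret <- Sys.getenv("TWITTER_ACCESS_SECRET")

setup_twitter_oauth(our_key, our_secret, our_token, our_access_secret)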

Retrieving Tweets

So, we’ve authenticated. Next, we have to retrieve our tweets.

For this step, we’ll be retrieving tweets based on some keywords associated with each candidate, such as

• their political party
• their names
• their party slogan for these elections
• mentions of their twitter handles

You can do this for every single presidential candidate, but for this lecture, I'll just be doing it for the three main contenders. . . or, put another way: the only three that people were talking about. . . on Twitter anyway * insert awkward silence *.

So that I won't overwhelm this lecture with code, I'll only reproduce the code for the first candidate on the list here (don't forget: the full code is available on my GitHub repo here):

#buhari
kword1 = searchTwitter("APC+buhari+gmb+change", n=1500, since="2015-03-22")
#convert to data frame
kword1 <- twListToDF(kword1)
#get only text
kword1 <- kword1$text

kword2 <- searchTwitter("#APC", n=1500, since="2015-03-22")
kword2 <- twListToDF(kword2)
#get only text
kword2 <- kword2$text
#remove duplicate tweets
kword2 <- kword2[!kword2 %in% kword1]

kword3 <- searchTwitter("@GMB", n=1500, since="2015-03-22")
kword3 <- twListToDF(kword3)
#get only text
kword3 <- kword3$text
#remove duplicate tweets


kword3 <- kword3[!kword3 %in% kword1]
kword3 <- kword3[!kword3 %in% kword2]

kword4 <- searchTwitter("#change+buhari", n=1500, since="2015-03-22")
kword4 <- twListToDF(kword4)
#get only text
kword4 <- kword4$text
#remove duplicate tweets
kword4 <- kword4[!kword4 %in% kword1]
kword4 <- kword4[!kword4 %in% kword2]
kword4 <- kword4[!kword4 %in% kword3]

kword5 <- searchTwitter("buhari", n=1500, since="2015-03-22")
kword5 <- twListToDF(kword5)
#get only text
kword5 <- kword5$text
#remove duplicate tweets
kword5 <- kword5[!kword5 %in% kword1]
kword5 <- kword5[!kword5 %in% kword2]
kword5 <- kword5[!kword5 %in% kword3]
kword5 <- kword5[!kword5 %in% kword4]

kword6 <- searchTwitter("@ThisIsBuhari", n=1500, since="2015-03-22")
kword6 <- twListToDF(kword6)
#get only text
kword6 <- kword6$text
#remove duplicate tweets
kword6 <- kword6[!kword6 %in% kword1]
kword6 <- kword6[!kword6 %in% kword2]
kword6 <- kword6[!kword6 %in% kword3]
kword6 <- kword6[!kword6 %in% kword4]
kword6 <- kword6[!kword6 %in% kword5]

kword7 <- searchTwitter("@APCNigeria", n=1500, since="2015-03-22")
kword7 <- twListToDF(kword7)
#get only text
kword7 <- kword7$text
#remove duplicate tweets
kword7 <- kword7[!kword7 %in% kword1]
kword7 <- kword7[!kword7 %in% kword2]
kword7 <- kword7[!kword7 %in% kword3]
kword7 <- kword7[!kword7 %in% kword4]
kword7 <- kword7[!kword7 %in% kword5]
kword7 <- kword7[!kword7 %in% kword6]

#concatenate all into one
buhari_tweets <- c(kword1, kword2, kword3, kword4, kword5, kword6, kword7)
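The search-then-dedup pattern above repeats for every keyword, so, purely as an optional refactor (a sketch using the same twitteR calls, with a hypothetical helper name), it could be wrapped in a small function:

#hypothetical helper: fetch the text of tweets for one query and drop any already seen
fetch_new_tweets <- function(query, seen=character(0), n=1500, since="2015-03-22"){
  txt <- twListToDF(searchTwitter(query, n=n, since=since))$text
  txt[!txt %in% seen]
}

#same seven queries as above, accumulating unique tweets as we go
queries <- c("APC+buhari+gmb+change", "#APC", "@GMB", "#change+buhari",
             "buhari", "@ThisIsBuhari", "@APCNigeria")
buhari_tweets <- character(0)
for (q in queries){
  buhari_tweets <- c(buhari_tweets, fetch_new_tweets(q, seen=buhari_tweets))
}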

I used keywords associated with their campaign. For Jonathan, keywords such as

• "#GEJ"
• "Transformation Agenda"
• "#PDP"
• "Forward Nigeria"

and for Sonaiya, I followed pretty much the same protocol, only in her case I added the keyword 'Female+President', because it's unique to her.

I made sure no one went above 7 keywords though. . .


You can store the tweets by writing them to CSV files, or any of the other formats supported in R:

write.csv(jonathan_tweets, "jonathan_tweets.csv", row.names=F)
write.csv(buhari_tweets, "buhari_tweets.csv", row.names=F)
write.csv(sonaiya_tweets, "sonaiya_tweets.csv", row.names=F)
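If you come back to the analysis in a later session, the saved tweets can be read back in along these lines (a sketch; write.csv stores each character vector as a one-column data frame, so we pull that column back out):

#reload previously saved tweets; [[1]] takes the single text column
buhari_tweets   <- read.csv("buhari_tweets.csv",   stringsAsFactors=FALSE)[[1]]
jonathan_tweets <- read.csv("jonathan_tweets.csv", stringsAsFactors=FALSE)[[1]]
sonaiya_tweets  <- read.csv("sonaiya_tweets.csv",  stringsAsFactors=FALSE)[[1]]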

Note: should you decide to do this for other presidential candidates, don't be surprised if some tweet queries return 'NULL' - remember that it simply means no one is talking about them on Twitter. . . * awkward silence *.

Processing Tweets

Once our tweets have been retrieved, we’ll need to do some processing, such as

• remove '@ people', so we're dealing with only words
• remove punctuation and numbers
• remove html links
• remove 'stopwords': words such as 'I', 'you', 'me', 'the' have no emotional/polar value in our case, and can be discarded in order not to skew our results
• remove unnecessary spaces

Here's an example shown for only the Buhari tweets. . . do run the same code for the others.

# remove all unnecessary characters from the tweets
buhari_txt = gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", buhari_tweets)
# remove '@ people'
buhari_txt = gsub("@\\w+", "", buhari_txt)
# remove punctuation
buhari_txt = gsub("[[:punct:]]", "", buhari_txt)
# remove numbers
buhari_txt = gsub("[[:digit:]]", "", buhari_txt)
# remove html links
buhari_txt = gsub("http\\w+", "", buhari_txt)
# remove unnecessary spaces
buhari_txt = gsub("[ \t]{2,}", "", buhari_txt)
buhari_txt = gsub("^\\s+|\\s+$", "", buhari_txt)

# convert to lowercase
# define "tolower error handling" function
try.error = function(x){
  # create missing value
  y = NA
  # tryCatch error
  try_error = tryCatch(tolower(x),
                       error=function(e) e)
  # if not an error
  if (!inherits(try_error, "error"))
    y = tolower(x)
  # result
  return(y)
}
#lower case using try.error with sapply


buhari_txt = sapply(buhari_txt, try.error)

#remove stopwords
buhari_txt <- buhari_txt[!buhari_txt %in% stopwords("SMART")]

# remove NAs in buhari_txt
buhari_txt = buhari_txt[!is.na(buhari_txt)]
names(buhari_txt) = NULL

Here’s a sample of what our results look like:

head(buhari_txt)

## [1] "the time has comechange has come vote for mevote apcvote for changegmbhttp..."
## [2] "from the way buhari speaks you will know he is the right man to change nigeria thisisbuhari gmb apc"
## [3] "the time has comechange has come vote for mevote apcvote for changegmbhttp..."
## [4] "the time has comechange has come vote for mevote apcvote for changegmbhttp..."
## [5] "the time has comechange has come vote for mevote apcvote for changegmbhttp..."
## [6] "mai malafa ya karaya nigeria sai baba buhari apc saibuharichange gmbpyo"
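Since exactly the same cleaning steps are repeated for each candidate later on, you could, if you prefer, fold them into a small helper (a sketch with a hypothetical name; it simply mirrors the gsub/tolower/stopword steps above):

#hypothetical helper that applies the same cleaning pipeline to any tweet vector
clean_tweets <- function(tweets){
  txt <- gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", tweets)  # retweet markers
  txt <- gsub("@\\w+", "", txt)                           # '@ people'
  txt <- gsub("[[:punct:]]", "", txt)                     # punctuation
  txt <- gsub("[[:digit:]]", "", txt)                     # numbers
  txt <- gsub("http\\w+", "", txt)                        # html links
  txt <- gsub("[ \t]{2,}", "", txt)                       # repeated spaces/tabs
  txt <- gsub("^\\s+|\\s+$", "", txt)                     # leading/trailing space
  txt <- sapply(txt, try.error)                           # lowercase, tolerating errors
  txt <- txt[!txt %in% stopwords("SMART")]                # drop stopword-only entries
  txt <- txt[!is.na(txt)]                                 # drop NAs
  names(txt) <- NULL
  txt
}

#e.g. buhari_txt <- clean_tweets(buhari_tweets)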

Polarity and Emotions

For this step, we’ll simply be using the classify_emotion and classify_polarity functions from the sentiment package.

However, I noticed that the default lexicons in these functions were not adequate. So, I modified them (see my GitHub repo for the full source file).

# classify polarity with our new classify polarity function

#download tweets, lexicons dataset and functions
download.file(url="https://www.dropbox.com/s/mqdpdkz7cywklby/sentiment-analysis.zip?dl=1",
              destfile="sentiment-analysis.zip")
# unzip it
unzip(zipfile="sentiment-analysis.zip",
      exdir = "sentiment-analysis")

#set working directory to newly downloaded folder
setwd("sentiment-analysis")

#load functions
source("sentiment-analysis/new_classify_polarity.R")
source("sentiment-analysis/score.sentiment.R")

#you can also change the algorithm here to 'voter'. Try it and see the results
class_pol = new_classify_polarity(buhari_txt, algorithm="bayes")
# get polarity best fit
polarity = class_pol[,4]

# classify emotion
class_emo = classify_emotion(buhari_txt, algorithm="bayes", prior=1.0)
# get emotion best fit
emotion = class_emo[,7]
# substitute NA's by "unknown"
emotion[is.na(emotion)] = "unknown"


#put all results into a dataframe
buhari_df = data.frame(text=buhari_txt,
                       emotion=emotion,
                       polarity=polarity,
                       Candidate=rep("buhari", length(buhari_txt)),
                       stringsAsFactors=FALSE)

# sort data frame
buhari_df = within(buhari_df,
                   emotion <- factor(emotion,
                                     levels=names(sort(table(emotion),
                                                       decreasing=TRUE))))

We’ll also be creating a function to place a sentiment score on each tweet:

#thanks to Jeffrey Bean for this function!
score.sentiment = function(sentences, pos.words, neg.words, .progress='none'){
  require(plyr)
  require(stringr)

  scores = laply(sentences, function(sentence, pos.words, neg.words){
    word.list = str_split(sentence, '\\s+')
    # sometimes a list() is one level of hierarchy too much
    words = unlist(word.list)
    # compare our words to the dictionaries of positive & negative terms
    pos.matches = match(words, pos.words)
    neg.matches = match(words, neg.words)
    # match() returns the position of the matched term or NA
    # we just want a TRUE/FALSE:
    pos.matches = !is.na(pos.matches)
    neg.matches = !is.na(neg.matches)
    # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
    score = sum(pos.matches) - sum(neg.matches)
    return(score)
  }, pos.words, neg.words, .progress=.progress)

  scores.df = data.frame(score=scores, text=sentences)
  return(scores.df)
}
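As a quick sanity check (with made-up word lists, not the lexicons we load next), the function behaves like this:

#toy example: two positive matches minus one negative match
score.sentiment("great great terrible news",
                pos.words=c("great", "good"),
                neg.words=c("terrible", "awful"))
#returns a one-row data frame with score = 1 and the original text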

With that, we can get the sentiment score for each candidate:

#load lexicons
subjectivity <- read.csv(system.file("data/subjectivity.csv.gz",
                                     package="sentiment"),
                         header=F, stringsAsFactors=F)

names(subjectivity) <- c("word", "strength", "polarity")

pos.words <- read.table("sentiment-analysis/positive-words.txt", stringsAsFactors=F, skip=35)
pos.words[,2] <- rep("subj", nrow(pos.words))
pos.words[,3] <- rep("positive", nrow(pos.words))
names(pos.words) <- names(subjectivity)

neg.words <- read.table("sentiment-analysis/negative-words.txt", stringsAsFactors=F, skip=35)
neg.words[,2] <- rep("subj", nrow(neg.words))
neg.words[,3] <- rep("negative", nrow(neg.words))
names(neg.words) <- names(subjectivity)

#merge
subjectivity <- rbind(subjectivity, pos.words, neg.words)


# sentiment score
buhari = score.sentiment(buhari_txt,
                         subjectivity$word[subjectivity$polarity=="positive"],
                         subjectivity$word[subjectivity$polarity=="negative"])

buhari$Candidate = rep("buhari",nrow(buhari))

In case you’re having trouble, here’s this whole process done for the other candidates as well:

#jonathan case
jonathan_txt = gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", jonathan_tweets)
# remove at people
jonathan_txt = gsub("@\\w+", "", jonathan_txt)
# remove punctuation
jonathan_txt = gsub("[[:punct:]]", "", jonathan_txt)
# remove numbers
jonathan_txt = gsub("[[:digit:]]", "", jonathan_txt)
# remove html links
jonathan_txt = gsub("http\\w+", "", jonathan_txt)
# remove unnecessary spaces
jonathan_txt = gsub("[ \t]{2,}", "", jonathan_txt)
jonathan_txt = gsub("^\\s+|\\s+$", "", jonathan_txt)

# define "tolower error handling" functiontry.error = function(x){

# create missing valuey = NA# tryCatch errortry_error = tryCatch(tolower(x),

error=function(e) e)# if not an errorif (!inherits(try_error, "error"))

y = tolower(x)# resultreturn(y)

}#lower case using try.error with sapplyjonathan_txt = sapply(jonathan_txt, try.error)

#remove stopwords
jonathan_txt <- jonathan_txt[!jonathan_txt %in% stopwords("SMART")]

# remove NAs in jonathan_txt
jonathan_txt = jonathan_txt[!is.na(jonathan_txt)]
names(jonathan_txt) = NULL

# classify emotion
class_emo = classify_emotion(jonathan_txt, algorithm="bayes", prior=1.0)
# get emotion best fit
emotion = class_emo[,7]
# substitute NA's by "unknown"
emotion[is.na(emotion)] = "unknown"

# classify polarity
class_pol = new_classify_polarity(jonathan_txt, algorithm="bayes")
# get polarity best fit
polarity = class_pol[,4]


jonathan_df = data.frame(text=jonathan_txt,
                         emotion=emotion,
                         polarity=polarity,
                         Candidate=rep("jonathan", length(jonathan_txt)),
                         stringsAsFactors=FALSE)

# sort data frame
jonathan_df = within(jonathan_df,
                     emotion <- factor(emotion,
                                       levels=names(sort(table(emotion),
                                                         decreasing=TRUE))))

# sentiment score
jonathan = score.sentiment(jonathan_txt,
                           subjectivity$word[subjectivity$polarity=="positive"],
                           subjectivity$word[subjectivity$polarity=="negative"])

jonathan$Candidate = rep("jonathan",nrow(jonathan))

#sonaiya case
sonaiya_txt = gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", sonaiya_tweets)
# remove at people
sonaiya_txt = gsub("@\\w+", "", sonaiya_txt)
# remove punctuation
sonaiya_txt = gsub("[[:punct:]]", "", sonaiya_txt)
# remove numbers
sonaiya_txt = gsub("[[:digit:]]", "", sonaiya_txt)
# remove html links
sonaiya_txt = gsub("http\\w+", "", sonaiya_txt)
# remove unnecessary spaces
sonaiya_txt = gsub("[ \t]{2,}", "", sonaiya_txt)
sonaiya_txt = gsub("^\\s+|\\s+$", "", sonaiya_txt)

# define "tolower error handling" functiontry.error = function(x){

# create missing valuey = NA# tryCatch errortry_error = tryCatch(tolower(x),

error=function(e) e)# if not an errorif (!inherits(try_error, "error"))

y = tolower(x)# resultreturn(y)

}#lower case using try.error with sapplysonaiya_txt = sapply(sonaiya_txt, try.error)

#remove stopwords
sonaiya_txt <- sonaiya_txt[!sonaiya_txt %in% stopwords("SMART")]

# remove NAs in sonaiya_txt
sonaiya_txt = sonaiya_txt[!is.na(sonaiya_txt)]
names(sonaiya_txt) = NULL

# classify emotion
class_emo = classify_emotion(sonaiya_txt, algorithm="bayes", prior=1.0)
# get emotion best fit
emotion = class_emo[,7]
# substitute NA's by "unknown"
emotion[is.na(emotion)] = "unknown"

# classify polarity
class_pol = new_classify_polarity(sonaiya_txt, algorithm="bayes")
# get polarity best fit
polarity = class_pol[,4]

sonaiya_df = data.frame(text=sonaiya_txt,
                        emotion=emotion,
                        polarity=polarity,
                        Candidate=rep("sonaiya", length(sonaiya_txt)),
                        stringsAsFactors=FALSE)

# sort data frame
sonaiya_df = within(sonaiya_df,
                    emotion <- factor(emotion,
                                      levels=names(sort(table(emotion),
                                                        decreasing=TRUE))))

# sentiment score
sonaiya = score.sentiment(sonaiya_txt,
                          subjectivity$word[subjectivity$polarity=="positive"],
                          subjectivity$word[subjectivity$polarity=="negative"])

sonaiya$Candidate = rep("sonaiya",nrow(sonaiya))

Merging Results

#merge all the emotions and polarities
sentiments <- rbind(buhari_df, jonathan_df, sonaiya_df)
#merge all the sentiment scores
results <- rbind(buhari, jonathan, sonaiya)

With our results merged, it’s time to view them!

#polarity
ggplot(sentiments, aes(x=polarity)) +
  geom_bar(aes(y=..count.., fill=polarity), position="dodge") +
  scale_fill_brewer(palette="Dark2") +
  labs(x="Polarity categories", y="Number of Tweets") +
  facet_grid(.~Candidate)

#Emotions
ggplot(sentiments, aes(x=emotion)) +
  geom_bar(aes(y=..count.., fill=emotion), position="dodge") +
  scale_fill_brewer(palette="Dark2") +
  labs(x="Emotion categories", y="Number of Tweets", title="Emotions Evoked") +
  facet_grid(.~Candidate)

#Total/Cumulative Score
total_buhari <- data.frame(score=sum(buhari$score),
                           Candidate=rep("Buhari", length(sum(buhari$score))))
total_jonathan <- data.frame(score=sum(jonathan$score),
                             Candidate=rep("Jonathan", length(sum(jonathan$score))))
total_sonaiya <- data.frame(score=sum(sonaiya$score),
                            Candidate=rep("Sonaiya", length(sum(sonaiya$score))))


#plot!
ggplot() +
  geom_bar(data=total_buhari, mapping=aes(x=Candidate, y=score),
           binwidth=10, position="dodge",
           stat="identity", fill="red") +
  geom_bar(data=total_jonathan, mapping=aes(x=Candidate, y=score),
           binwidth=10, position="dodge",
           stat="identity", fill="yellow") +
  geom_bar(data=total_sonaiya, mapping=aes(x=Candidate, y=score),
           binwidth=10, position="dodge",
           stat="identity", fill="green") +
  labs(x="Candidate", y="Score", title="Total Sentiment Scores Tallied")
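If you prefer, the same chart can be built from a single combined data frame rather than three separate bar layers; here is a sketch (same data, one geom_bar call, with the fill colours mapped per candidate):

#combine the three one-row totals and plot them in one layer
totals <- rbind(total_buhari, total_jonathan, total_sonaiya)
ggplot(totals, aes(x=Candidate, y=score, fill=Candidate)) +
  geom_bar(stat="identity") +
  scale_fill_manual(values=c("Buhari"="red", "Jonathan"="yellow", "Sonaiya"="green")) +
  labs(x="Candidate", y="Score", title="Total Sentiment Scores Tallied")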

And that’s a wrap! Have fun applying these to other areas! :D ;)


[Figure: "Using Naive Bayes Algorithm" - number of tweets per polarity category (negative / neutral / positive), faceted by candidate (Buhari, Jonathan, Sonaiya)]

[Figure: "Using Simple Voter Algorithm" - number of tweets per polarity category (negative / neutral / positive), faceted by candidate]

[Figure: "Sentiment Scores Per Candidate" - distribution of per-tweet sentiment scores for Buhari, Jonathan and Sonaiya]

[Figure: "Total Sentiment Scores Tallied" - total sentiment score per candidate]

Briefly About The Author

Deolu Adeleye is a data scientist and statistician, and is skilled in a number of programming languages.

He currently consults and facilitates trainings on machine learning and artificial intelligence concepts, and, with practical applications and projects executed, is quite conversant with a lot of related algorithms.

He is also a professional editor and writer, has published a number of books, and even wrote and self-published one, "A (Short) Study On Humility" (available here).

He runs a blog, 'Deolu Blogs Here', where he writes (very) short stories, (some) life lessons, as well as the (occasional) mischievousness. . .

Contact: [email protected]

(for more info, see LinkedIn Profile)
