
Research Article

Opinion Mining from Online User Reviews Using Fuzzy Linguistic Hedges

Mita K. Dalal1 and Mukesh A. Zaveri2

1 Information Technology Department, Sarvajanik College of Engineering & Technology, Surat 395001, India
2 Computer Engineering Department, S. V. National Institute of Technology, Surat 395007, India

Correspondence should be addressed to Mita K. Dalal; [email protected]

Received 29 August 2013; Accepted 1 January 2014; Published 20 February 2014

Academic Editor: Sebastian Ventura

Copyright © 2014 M. K. Dalal and M. A. Zaveri. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Nowadays, there are several websites that allow customers to buy and post reviews of purchased products, which results in the incremental accumulation of a large volume of reviews written in natural language. Moreover, conversance with E-commerce and social media has raised the level of sophistication of online shoppers, and it is common practice for them to compare competing brands of products before making a purchase. Prevailing factors such as the availability of online reviews and raised end-user expectations have motivated the development of opinion mining systems that can automatically classify and summarize users’ reviews. This paper proposes an opinion mining system that can be used for both binary and fine-grained sentiment classification of user reviews. Feature-based sentiment classification is a multistep process that involves preprocessing to remove noise, extraction of features and corresponding descriptors, and tagging their polarity. The proposed technique extends the feature-based classification approach to incorporate the effect of various linguistic hedges by using fuzzy functions to emulate the effect of modifiers, concentrators, and dilators. Empirical studies indicate that the proposed system can perform reliable sentiment classification at various levels of granularity, with a high average accuracy of 89% for binary classification and 86% for fine-grained classification.

1. Introduction

In the present age, it has become common practice for people to communicate or express their opinions and feedback on various aspects affecting their daily life through some form of social media. An upsurge in online activities like blogging, social networking, emailing, review posting, and so forth has resulted in the incremental accumulation of a large volume of user-generated content. Most of these online interactions are in the form of natural language text. This in turn has led to increased research interest in content-organization and knowledge engineering tasks such as automatic classification, summarization, and opinion mining from web-based data.

Due to its high commercial importance, mining and summarizing of user reviews is a widely studied application [1–9]. The two main tasks involved in opinion mining, regardless of the application, are (1) identification of opinion-bearing phrases/sentences from free text and (2) tagging the sentiment polarity of opinionated phrases. The descriptors, such as adjectives or adverbs, describing the features present in an opinion sentence mainly indicate the polarity of the expressed opinion. However, the strength and polarity of the opinionated phrases are also affected by the presence of linguistic hedges such as modifiers (e.g., “not”), concentrators (e.g., “very,” “extremely”), and dilators (e.g., “quite,” “almost,” and “nearly”). Zadeh developed the concept of fuzzy linguistic variables and linguistic hedges that modify the meaning and intensity of their operands [10, 11]. Recent papers in this field have also pointed out that the task of opinion mining is sensitive to such hedges and that taking the effect of linguistic hedges into consideration can improve the efficiency of the sentiment classification task [8, 12–17].

In this paper, we have proposed an approach to perform fine-grained sentiment classification of online product reviews by incorporating the effect of fuzzy linguistic hedges on opinion descriptors. We have proposed novel fuzzy functions that emulate the effect of different linguistic hedges and incorporated them in the sentiment classification task.

Our opinion mining system involves various phases: (1) the preprocessing phase, (2) the feature-set generation phase, and (3) the fuzzy opinion classification phase based on fuzzy linguistic hedges. Empirical studies indicate that our sentiment mining approach can be successfully applied for binary as well as fine-grained sentiment classification of user reviews. In binary sentiment classification, the reviews are classified into two output classes, “positive” and “negative.” In fine-grained classification, the reviews are classified into multiple output classes: “very positive,” “positive,” “neutral,” “negative,” and “very negative.” Our opinion mining system can also be used to generate comparative summaries of similar products [8]. Moreover, the proposed fuzzy functions for emulating linguistic hedges give better accuracy than other contemporary approaches for hedge adjustment [13, 14, 16]. Note that while there are several recent papers on user review classification, there are relatively few which have explicitly proposed approaches to integrate the effect of linguistic hedges [13, 14, 16, 17].

Hindawi Publishing Corporation, Applied Computational Intelligence and Soft Computing, Volume 2014, Article ID 735942, 9 pages. http://dx.doi.org/10.1155/2014/735942

The rest of the paper is organized as follows. Section 2 surveys related work done in the area of opinion mining, Section 3 describes the proposed framework for opinion mining using linguistic hedges, and Section 4 discusses the empirical evaluation of the proposed strategy and the obtained results. Finally, we conclude and give directions for future work in this field.

2. Related Work

Mining opinions from user-generated reviews is a special application of natural language processing that requires automatic classification and summarization of text.

Automatic text classification is usually done by using a prelabeled training set and applying various machine learning methods such as Naïve Bayesian [18], support vector machines [19], Artificial Neural Networks [20], or hybrid approaches [21, 22] that combine various machine learning methods to improve the efficiency of classification. Approaches to automatic text summarization have mainly focused on extracting significant sentences from text and can be broadly classified into four categories: (1) heuristics-based approaches that rely on a combination of heuristics such as cue/key/title/position [23–26], (2) semantics-based approaches such as lexical chains [27] and rhetorical parsing [28], (3) user query-oriented approaches typically used for retrieval in search engines and question-answering applications based on metrics such as maximal marginal relevance (MMR) [29, 30], and (4) cluster-oriented approaches that form clusters of sentences based on sentence similarity computations and then extract the central sentence of each cluster to include in the summary [31].

However, the task of summarizing or classifying the sentiment reflected in users’ opinions is quite different from the text mining approaches mentioned above. It is not focused on generating extractive summaries or classifying entire documents based on topic-indicative words. Instead, sentiment mining involves such tasks as semantic feature-set generation, identifying opinion words (usually adjectives or adverbs) and associating them with corresponding features, determining the polarity of the feature-opinion pairs, and finally aggregating the mined results to detect overall sentiment [1–9, 12, 32–34]. Users’ opinions are usually expressed informally in natural language and frequently contain errors in spelling and grammar. So, they require a lot of preprocessing to generate clean text [8, 12]. Moreover, feature extraction from user reviews requires language-dependent semantic processing such as parts-of-speech (POS) tagging [1, 5, 8, 12, 33] in addition to statistical frequency analysis. The POS tagging is usually done using a linguistic parser; for example, the link grammar parser [8, 35] and the Stanford parser [12, 36] are well-known linguistic parsers. The nouns and noun phrases tagged by the parser become initial candidate features. Various approaches are used to extract the feature set useful for opinion mining. These approaches include frequent itemset identification using the Apriori algorithm [1, 3, 5, 7, 32, 37], seed-set expansion using an initial seed set of features [12, 33], or multiwords-based [8, 38–41] frequent feature extraction.

Feature descriptors such as adjectives or adverbs are mainly indicative of the polarity of an opinion phrase. In [42], the authors proposed a method to determine the orientation of adjectives based on conjunctions between adjectives. Statistical measures of word association such as pointwise mutual information (PMI) and latent semantic association (LSA) have also been used to determine semantic orientation [34, 43]. Another approach is to use a seed list of opinion words with previously known orientations [1, 12] and expand it based on lookup of synonyms and antonyms using some lexical resource like WordNet [44]. This approach is based on the observation that synonyms have similar orientations and antonyms have opposite orientations. It is important to note that the semantic orientation of a descriptor can differ depending on its usage. So, it is not enough to use corpus statistics to assign a fixed polarity to each descriptor. Hence, several recent papers have used the SentiWordNet tool [45] to determine opinion polarity [2, 8, 16, 33]. The advantage of using SentiWordNet is that it lists all usages of a descriptor and assigns them a corresponding sentiment orientation triplet which indicates their positivity/objectivity/negativity scores.

In addition to opinion descriptors such as adjectives or adverbs, the orientation (i.e., polarity) and strength of an opinion phrase are sensitive to the presence of linguistic hedges [10, 14, 46]. Some authors refer to linguistic hedges as contextual valence shifters and have demonstrated that they can affect the valence (polarity) of a linguistic phrase [13, 14]. Moreover, it has been shown that the accuracy of opinion classification can be improved by augmenting the simple positive/negative term counting approach to incorporate the effect of such hedges [13, 14]. In [16], the authors have used a hybrid scoring technique based on a linear combination of PMI, SentiWordNet, and manually assigned scores to derive the initial sentiment value of an opinionated phrase, which is then adjusted using fuzzy functions when hedges are present. In this paper we have proposed alternative fuzzy functions to incorporate the effect of linguistic hedges. Our approach achieves higher accuracy compared to contemporary approaches.


3. Proposed Opinion Mining System

This section discusses the design of our proposed opinion mining system based on fuzzy linguistic hedges. Our opinion mining system automatically extracts opinionated phrases from unstructured user reviews and classifies them based on their sentiment. Moreover, while assigning a sentiment score to a phrase, it takes into account the differences in intensity or polarity between opinionated phrases describing a feature “f,” such as “f is good” (no hedge), “f is extremely good” (concentrating/intensifying hedge), “f is quite good” (dilating/diminishing hedge), and “f is not good” (modifying/inverting hedge).

We performed opinion mining on a dataset of online user reviews of various products, which was collected using a web crawler. As depicted in Figure 1, our system consists of three major phases: (1) the preprocessing phase, (2) the feature generation phase, and (3) the fuzzy opinion classification phase.

3.1. Preprocessing Phase. User-generated online reviews require preprocessing to remove noise [8, 12] before the mining process can be performed. This is because these reviews are usually short informal texts written by nonexperts and frequently contain mistakes in spelling and grammar, use of nondictionary words such as abbreviations or acronyms of common terms, mistakes in punctuation, incorrect capitalization, and so forth.

Since we need to perform POS tagging in the next phase using a linguistic parser, we perform cleaning tasks such as spell-error correction using a standard word processor, sentence boundary detection [12], and repetitive punctuation conflation [8]. Syntactically correct sentences end with predefined punctuation such as the full stop (.), interrogation mark (?), or exclamation mark (!). Sentence boundary detection involves tasks such as identifying the ends of sentences on the basis of correct punctuation and disambiguating the full stop (.) from decimal points and abbreviated endings (e.g., “Prof.,” “Pvt.”). We also capitalize the first letter of each new sentence. Bloggers sometimes overuse a punctuation symbol for emphasis. In such cases, the repetitive symbol is conflated to a single occurrence [8]. For example, a review posted by a blogger may read as follows:

“the display is somewhat spotty!!!”

After preprocessing, the sentence would read as follows:

“The display is somewhat spotty!”

In the above sentence, the first letter has been capitalized and the repetitive exclamation mark (“!!!”) has been conflated so that it occurs only once (“!”).
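The conflation and capitalization steps above can be sketched with regular expressions (a minimal illustration; the function name and regex are ours, not the authors’ implementation):

```python
import re

def conflate_and_capitalize(sentence):
    """Conflate repeated sentence-final punctuation (e.g. '!!!' -> '!')
    and capitalize the first letter of the sentence."""
    # Collapse a run of the same punctuation symbol to one occurrence.
    cleaned = re.sub(r'([.!?])\1+', r'\1', sentence)
    # Capitalize the first character (a no-op if already capitalized).
    return cleaned[:1].upper() + cleaned[1:]

print(conflate_and_capitalize("the display is somewhat spotty!!!"))
# "The display is somewhat spotty!"
```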

Thus, the preprocessing steps generate sentences which can be parsed automatically by the linguistic parser. Moreover, product reviews often quote abbreviations and acronyms relevant to the domain which cannot be found in standard language dictionaries. These cannot be considered as spelling mistakes. So, to make our system more fault tolerant, we also generate a domain-specific resource of frequently occurring abbreviations and acronyms [47] to augment the standard word dictionary. If a nondictionary word frequently occurs in the review dataset, it is examined by a human expert and added to the domain resource if found relevant. For example, Table 1 shows a partial list of acronyms that were extracted from our user review dataset for the “smartphones” product.

Table 1: Partial list of acronyms for “smartphones.”

Acronym      Expanded form
iOS          iPhone operating system
HD video     High definition video
LTE          Long Term Evolution
TFT          Thin film transistor
LCD          Liquid crystal display
AUX cable    Auxiliary cable
HDR          High dynamic range
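The construction of this resource can be sketched as a frequency filter over non-dictionary tokens (an illustrative sketch with a toy dictionary; the actual system relies on a full word list, and a human expert makes the final decision on each flagged candidate):

```python
from collections import Counter

def acronym_candidates(reviews, dictionary, min_freq=3):
    """Flag frequent non-dictionary tokens as acronym/abbreviation
    candidates for examination by a human expert."""
    counts = Counter(
        tok for review in reviews
        for tok in review.split()
        if tok.lower() not in dictionary
    )
    return [tok for tok, n in counts.items() if n >= min_freq]

# Toy dictionary and reviews for illustration only.
dictionary = {"the", "screen", "is", "sharp", "video", "looks", "great", "playback"}
reviews = ["the HD video looks great", "HD playback is sharp", "great HD screen"]
print(acronym_candidates(reviews, dictionary))  # ['HD']
```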

3.2. Feature-Set Generation Phase. In this phase, we generate the feature set for opinion mining from the cleaned review sentences generated in the preprocessing phase. Since the spelling and punctuation errors have been removed and sentence boundaries have been clearly identified, we now parse these sentences using the link grammar parser [35]. The parser produces POS (parts-of-speech) tagged output. Frequently occurring nouns (N) and noun phrases (NP) are treated as features, while the adjective or adverb modifiers describing them are treated as opinion words or descriptors [8]. Additionally, we also take into account any linguistic hedges preceding the descriptors. For example, consider the following review sentence for a “smartphone” product:

“The call quality is extremely good and navigation is comfortable but the body is somewhat fragile.”

When this review sentence is parsed using the link grammar parser, we get an output like

“The call [.n] quality [.n] is [.v] extremely good [.a] and navigation [.n] is [.v] comfortable [.a] but the body [.n] is [.v] somewhat fragile [.a].”

In the above sentence, [.n] indicates a noun, [.a] indicates an adjective, and [.v] indicates a verb. Thus, “call quality” can be interpreted as a noun feature which is described by the adjective descriptor “good.” Similarly, “navigation” and “body” are features described by the descriptors “comfortable” and “fragile,” respectively. Moreover, the descriptor “good” is preceded by the concentrator hedge “extremely” and the descriptor “fragile” is preceded by the dilator hedge “somewhat,” while the descriptor “comfortable” has no preceding hedge in this particular review sentence.
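A simplified extraction pass over such tagged output can be sketched as follows (the hedge lexicon and the nearest-preceding-noun pairing heuristic are our illustration; the paper’s system works from the link grammar parser’s full output):

```python
# Hedge lexicon (illustrative subset of the consolidated hedge lists).
HEDGES = {"extremely": "concentrator", "somewhat": "dilator", "not": "modifier"}

def extract_triples(tagged):
    """Extract (feature, hedge, descriptor) triples from a list of
    (word, tag) pairs, where tag is 'n' (noun), 'a' (adjective),
    or anything else. The most recent noun run is taken as the feature;
    a hedge is the token immediately preceding the adjective, if listed."""
    triples, noun_run, last_feature = [], [], None
    for i, (word, pos) in enumerate(tagged):
        if pos == "n":
            noun_run.append(word)
        else:
            if noun_run:  # flush a completed noun (phrase) as the feature
                last_feature = " ".join(noun_run)
                noun_run = []
            if pos == "a":
                prev = tagged[i - 1][0] if i else ""
                hedge = prev if prev in HEDGES else None
                triples.append((last_feature, hedge, word))
    return triples

tagged = [("call", "n"), ("quality", "n"), ("is", "v"), ("extremely", "x"),
          ("good", "a"), ("navigation", "n"), ("is", "v"), ("comfortable", "a"),
          ("body", "n"), ("is", "v"), ("somewhat", "x"), ("fragile", "a")]
print(extract_triples(tagged))
# [('call quality', 'extremely', 'good'), ('navigation', None, 'comfortable'),
#  ('body', 'somewhat', 'fragile')]
```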

The mined feature set is tabulated in an FOLH table (feature orientation table with linguistic hedges). Table 2 shows the FOLH table entries for some of the most frequently commented upon features from the user review set for the “smartphone” product.

The FOLH table stores the product features as well as the descriptors and modifying hedges corresponding to the features mined from the training set of reviews. For example, consider the first entry in Table 2. It indicates that the linguistic variable “call quality” is a smartphone feature. This feature can take on fuzzy values like “good,” “excellent,” or “satisfactory,” which have a positive polarity, or it can take on fuzzy values like “poor” and “bad,” which have a negative polarity. In addition, the intensity of the fuzzy values describing the feature can be increased by concentrator linguistic hedges such as “very” and “extremely” or decreased by dilator linguistic hedges such as “somewhat” and “hardly.” The sentiment polarity can be reversed by the inverter hedges “not” and “never.”

[Figure 1 is a flow diagram: a web crawler collects user-generated product reviews from product review websites; the preprocessing phase (spell-error correction, sentence boundary detection, incorrect capitalization rectification, repetitive punctuation conflation) produces preprocessed, ready-to-parse reviews; the feature generation phase (parts-of-speech tagging using a linguistic parser, frequent feature extraction using multiwords with decomposition strategy, mining of feature descriptors and linguistic hedges), supported by a semantic resource of standard dictionary terms plus frequently occurring acronyms and abbreviations, produces the FOLH table; the fuzzy opinion classification phase (extract opinionated phrases, apply fuzzy functions for linguistic hedges, compute strength and polarity of opinionated phrases) performs the opinion mining.]

Figure 1: System design for opinion mining.

Table 2: Partial feature orientation table with linguistic hedges for smartphone products.

Feature                                    | Descriptors with positive polarity (P = 1)                                | Descriptors with negative polarity (P = −1) | Modifier (inverter) | Concentrator        | Dilator
Call quality                               | Good, excellent, and satisfactory                                         | Poor, bad                                   | Not, never          | Very, extremely     | Quite, hardly
Body/design/build                          | Sleek, lightweight, thin, slim, beautiful, sturdy, striking, and gorgeous | Heavy, bulky, and fragile                   | —                   | Very, absolutely    | Somewhat, quite, and almost
Screen/touchscreen/display/retina display  | Nice, great, sensitive, awesome, clear, and bright                        | Dull, bad, and spotty                       | Not                 | Highly, incredibly  | Quite
Camera/phone camera/digital camera         | Awesome, good, superior, and high-resolution                              | Low resolution, inferior                    | Not                 | Very, positively    | —
User interface                             | Friendly, attractive, good, and lovely                                    | Bad, poor                                   | Not so              | Highly, very        | More or less
Navigation                                 | Comfortable, intuitive, easy, and fast                                    | Bad, difficult, slow, and jumpy             | —                   | Very, significantly | Quite

Frequently occurring semantic word sequences are treated as multiword features [8, 38–41]. In the previous example, “call quality” is a multiword feature. We use the multiword with decomposition strategy approach [39] for feature extraction, as it requires less pruning and improves classification accuracy compared to Apriori-based and seed-set expansion-based approaches [8]. The orientation and initial sentiment value of the corresponding descriptors are determined using the SentiWordNet tool [2, 8, 33, 45]. For example, the SentiWordNet score for the adjective “fragile” (as used to describe the body in the smartphone review sentence) is given by the triplet (P: 0, O: 0.375, N: 0.625), which indicates its positive, objective, and negative scores. Since the negative sentiment value is the highest in the triplet, “fragile” is assigned a polarity of “−1,” which indicates negative orientation, and an initial sentiment intensity value of “0.625,” which is used in the next phase.
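The selection rule described above can be sketched as follows (a hypothetical helper of ours; treating a dominant objective score as non-opinionated is our assumption, not stated in the paper):

```python
def initial_score(pos, obj, neg):
    """Map a SentiWordNet (positive, objective, negative) triplet to a
    (polarity, intensity) pair: the dominant component determines the
    orientation, and its score becomes the initial fuzzy intensity."""
    top = max(pos, obj, neg)
    if top == neg:
        return -1, neg
    if top == pos:
        return +1, pos
    return 0, obj  # objective usage dominates: treated as non-opinionated

print(initial_score(0.0, 0.375, 0.625))  # (-1, 0.625), as for "fragile"
```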

In the FOLH table, semantically similar features are clubbed together by a human expert to avoid redundancy and to get a more accurate value of occurrence frequency [8]. It is important to note that several acronyms (e.g., HD video, LCD screen, etc.) identified during the preprocessing phase are actually multiword features and are thus added to the feature set. The FOLH table is used in the next phase to compute the overall sentiment score of a user review and classify it. Since hedges are generic terms which could be combined with any feature-descriptor pair, a consolidated list of hedges for each of the three categories (i.e., modifier, concentrator, and dilator) is prepared.

3.3. Fuzzy Opinion Classification Phase. In this phase we perform fine-grained classification of users’ reviews. The reviews are classified as very positive, positive, neutral, negative, or very negative. We classify a new user review based on its fuzzy sentiment score, whose computation requires three steps: (1) extract features, associated descriptors, and hedges from the review based on FOLH table lookup, (2) identify the polarity and initial value of the feature descriptors based on their SentiWordNet scores, and (3) calculate the overall sentiment score using fuzzy functions to incorporate the effect of linguistic hedges.

The first two steps are performed as explained in Section 3.2. As discussed earlier, we consider the SentiWordNet score of a feature descriptor as its initial fuzzy score μ(s). If the descriptor has a preceding hedge, its modified fuzzy score is calculated using

f(μ(s)) = 1 − (1 − μ(s))^δ.  (1)

Similar to Zadeh’s proposition [10], if the hedge is a concentrator, we choose δ = 2, which gives the modified fuzzy concentrator score indicated in (2), while if the hedge is a dilator, we choose δ = 1/2, which gives the modified fuzzy dilator score indicated in (3):

f_c(μ(s)) = 1 − (1 − μ(s))^2,  (2)

f_d(μ(s)) = 1 − (1 − μ(s))^(1/2).  (3)

[Figure 2 plots the effect of the fuzzy linguistic hedges: f_c(μ(s)) and f_d(μ(s)) against μ(s), both axes ranging over [0, 1].]

Figure 2: Depiction of linguistic hedges.

Let us revisit the smartphone review sentence “The body is fragile.” As explained in Section 3.2, the initial sentiment score for the descriptor “fragile” obtained using SentiWordNet is μ(s) = 0.625. If this descriptor is preceded by a concentrator linguistic hedge, for example, “very fragile,” then its modified fuzzy score is obtained using (2) as f_c(μ(s)) = 0.8593. Similarly, if this descriptor is preceded by a dilator linguistic hedge, for example, “somewhat fragile,” then its modified fuzzy score is obtained using (3) as f_d(μ(s)) = 0.3876. Thus, the intensity level of a descriptor is adjusted on the basis of the linguistic hedge, whenever such hedges are present in a review sentence.

Figure 2 depicts the effect of applying the fuzzy linguistic hedges, that is, the concentrator and dilator, as per (2) and (3), respectively.
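Equations (1)–(3) translate directly into code; the sketch below reproduces the worked example for “fragile” (function names are ours; values match the text up to rounding):

```python
def hedge(mu, delta):
    """Fuzzy hedge of equation (1): f(mu) = 1 - (1 - mu)**delta."""
    return 1 - (1 - mu) ** delta

def concentrator(mu):
    return hedge(mu, 2)    # delta = 2, equation (2)

def dilator(mu):
    return hedge(mu, 0.5)  # delta = 1/2, equation (3)

mu = 0.625  # SentiWordNet intensity of "fragile"
print(round(concentrator(mu), 4))  # 0.8594 ("very fragile")
print(round(dilator(mu), 4))       # 0.3876 ("somewhat fragile")
```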

The proposed fuzzy functions have several desirable properties, as listed below.

Property 1. For all x, y ∈ [0, 1], if x < y, then f_c(x) < f_c(y) and f_d(x) < f_d(y).

Property 2. For all x ∈ (0, 1), f_d(x) < x < f_c(x).

Property 3. For all x ∈ [0, 1], f(x) ∈ [0, 1].

Page 6: Research Article Opinion Mining from Online User Reviews ...downloads.hindawi.com/journals/acisc/2014/735942.pdf · Research Article Opinion Mining from Online User Reviews Using

6 Applied Computational Intelligence and Soft Computing

Let x and y indicate initial sentiment values of a feature descriptor which are to be modified using the proposed functions for fuzzy linguistic hedges. From Property 1 it is clear that both the concentrator and dilator fuzzy functions are strictly increasing on the interval [0, 1]. Moreover, as indicated by Property 2, the dilator function decreases the value of the input sentiment variable while the concentrator function increases its value. Property 3 indicates that even after applying the fuzzy functions, the output value f(x) remains in the normalized range [0, 1].
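These properties can be checked numerically on a grid of interior points (a quick sanity-check sketch of ours, not part of the paper’s method):

```python
def f_c(x): return 1 - (1 - x) ** 2    # concentrator, equation (2)
def f_d(x): return 1 - (1 - x) ** 0.5  # dilator, equation (3)

grid = [i / 100 for i in range(1, 100)]  # interior points of [0, 1]
# Property 1: both functions are strictly increasing.
assert all(f_c(a) < f_c(b) and f_d(a) < f_d(b)
           for a, b in zip(grid, grid[1:]))
# Property 2: dilation lowers, concentration raises the score.
assert all(f_d(x) < x < f_c(x) for x in grid)
# Property 3: outputs stay in the normalized range [0, 1].
assert all(0 <= f(x) <= 1 for x in grid for f in (f_c, f_d))
print("all three properties hold on the grid")
```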

Let U represent the complete feature set of a product. Suppose that a user review comments on a subset F of the feature set. Further, let H represent the subset of F whose descriptors are preceded by concentrator or dilator linguistic hedges, while N represents the subset of F not preceded by these hedges. Thus, F ⊂ U and F = H ∪ N.

Now, the average fuzzy sentiment score is calculated as shown in

β_avg = ( ∑_{i=1}^{|H|} p_i f(μ_i(s)) + ∑_{k=1}^{|N|} p_k μ_k(s) ) / |F|.  (4)

In (4), the first term of the numerator is derived from (1) and accounts for the descriptors which have been modified by hedges (concentrator or dilator, as applicable), while the second term of the numerator accounts for the rest of the descriptors. The term p_i indicates the polarity of the i-th feature descriptor, which is looked up from the FOLH table. If the polarity is positive, its value is +1, and if the polarity is negative, its value is −1. Note that (1)–(3) are only applicable to concentrator and dilator hedges. If there is an “inverter” hedge (e.g., “not”) preceding a feature descriptor, it is accounted for simply by reversing the value of the polarity indicator p_i. Thus, an inverter hedge only changes the orientation of a sentiment phrase without affecting its magnitude.

The value of β_avg calculated using (4) falls in the range [−1, 1]. We further normalize this value using min-max normalization [48] to map it to the range [0, 1]. Upon applying min-max normalization to β_avg, we get the normalized fuzzy bias value β_N (β_N ∈ [0, 1]) as indicated in

β_N = (β_avg + 1) / 2.  (5)

Once the value of β_N is computed, the opinion class C can be determined using the following rule set:

if 0 ≤ β_N ≤ 0.25, then C = “very negative,” else
if 0.25 < β_N < 0.5, then C = “negative,” else
if β_N = 0.5, then C = “neutral,” else
if 0.5 < β_N ≤ 0.75, then C = “positive,” else
if 0.75 < β_N ≤ 1, then C = “very positive.”
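Equations (4) and (5) together with the rule set can be sketched as follows (the per-phrase inputs are illustrative; in the actual system they come from the FOLH table and SentiWordNet lookups):

```python
def review_score(scored_phrases):
    """Combine per-phrase scores into beta_N as in (4) and (5).
    Each phrase is (polarity, mu, hedged_mu_or_None): hedged phrases
    contribute their hedge-adjusted score, the rest their raw score."""
    total = sum(p * (h if h is not None else mu)
                for p, mu, h in scored_phrases)
    beta_avg = total / len(scored_phrases)  # falls in [-1, 1]
    return (beta_avg + 1) / 2               # min-max normalized to [0, 1]

def classify(beta_n):
    """The rule set above, mapping beta_N to an opinion class."""
    if beta_n <= 0.25: return "very negative"
    if beta_n < 0.5:   return "negative"
    if beta_n == 0.5:  return "neutral"
    if beta_n <= 0.75: return "positive"
    return "very positive"

# Illustrative review: "call quality extremely good" (+1, hedged score
# f_c(0.75) = 0.9375) and "body somewhat fragile" (-1, f_d(0.625) = 0.3876).
phrases = [(+1, 0.75, 0.9375), (-1, 0.625, 0.3876)]
beta_n = review_score(phrases)
print(round(beta_n, 2), classify(beta_n))  # beta_N ≈ 0.64 -> "positive"
```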

The accuracy of the sentiment classification task is verified by comparing the class assigned by our opinion miner with the star rating assigned by the user to that review. The next section discusses the empirical evaluation of our proposed method.

[Figure 3 is a bar chart comparing, for Smartphone 1 and Smartphone 2, the percentage of overall reviews (y-axis, 0–50%) falling into each review class: very positive, positive, neutral, negative, and very negative.]

Figure 3: Fine-grained classification of user reviews.

4. Empirical Evaluation and Results

This section presents the results of empirical evaluation of our opinion mining strategy. In order to evaluate our approach, we used a dataset of over 3000 user-generated product reviews crawled from different websites. The review database consisted of user-generated reviews for four types of products (i.e., tablets, E-book readers, smartphones, and laptops) of different brands. We selected websites where, in addition to review text, the users also give a rating (1–5 stars) to their review. We use 30% of the review database as the training set and 70% as the test set. As explained in Sections 3.1 and 3.2, we first preprocess the review text, extract product features, and generate the FOLH table using the training set of user product reviews. Then we perform classification on the test set of reviews using the equations and rule set derived in Section 3.3. The user-assigned 5-star rating is used as a basis to evaluate the accuracy of the proposed opinion mining system after the classification is complete. It is important to note that, unlike text classifiers based on supervised machine learning methods like Naïve Bayesian or SVM, the feature-based approach does not require a labeled training set for performing the classification.
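The star-rating-based evaluation can be sketched as follows. The one-to-one mapping from 1–5 stars to the five opinion classes is our assumption of the obvious correspondence; the paper does not state the exact mapping.

```python
# Hypothetical mapping of user star ratings to the five opinion classes;
# the obvious 1-to-1 correspondence is assumed here for illustration.
STAR_TO_CLASS = {
    1: "very negative", 2: "negative", 3: "neutral",
    4: "positive", 5: "very positive",
}

def evaluate(test_reviews, predict):
    """Accuracy of a classifier against user-assigned star ratings.

    test_reviews: list of (review_text, stars); predict: text -> class label.
    """
    correct = sum(
        predict(text) == STAR_TO_CLASS[stars] for text, stars in test_reviews
    )
    return correct / len(test_reviews)

# Toy usage with a stub classifier standing in for the opinion miner:
reviews = [("great phone", 5), ("awful battery", 1), ("okay screen", 3)]
stub = lambda text: {"great phone": "very positive",
                     "awful battery": "very negative",
                     "okay screen": "neutral"}[text]
print(evaluate(reviews, stub))  # 1.0
```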

Once the reliability of the opinion mining system is established, it can be used to automatically extract opinionated sentences from user reviews, perform fine-grained classification, and generate overall or feature-based comparative product summaries. For example, Figure 3 depicts the overall fine-grained sentiment-classification-based comparative summary for two models of smartphones. It is clear from Figure 3 that "smartphone 1" is more popular among users since it has significantly more positive reviews compared to "smartphone 2."

Figure 4 depicts the partial feature-based comparison of two smartphone products, based on some of the most frequently commented features, wherein granularity of classification was reduced to improve readability. In this example, the featurewise comparative product summary was generated by considering 𝛽𝑁 ≥ 0.5 as positive and 𝛽𝑁 < 0.5 as negative.
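A featurewise summary of this kind can be sketched as below; the data structure (a list of per-phrase (feature, 𝛽𝑁) pairs) is an illustrative assumption, with positivity thresholded at 𝛽𝑁 ≥ 0.5 as in the text.

```python
from collections import defaultdict

def featurewise_summary(feature_scores):
    """Percentage of positive mentions per product feature.

    feature_scores: list of (feature, beta_n) pairs, one per opinionated
    phrase; beta_n >= 0.5 is counted as positive, as in the text.
    """
    pos = defaultdict(int)
    total = defaultdict(int)
    for feature, beta_n in feature_scores:
        total[feature] += 1
        if beta_n >= 0.5:
            pos[feature] += 1
    return {f: 100.0 * pos[f] / total[f] for f in total}

scores = [("camera", 0.8), ("camera", 0.3), ("battery", 0.6), ("battery", 0.7)]
print(featurewise_summary(scores))  # {'camera': 50.0, 'battery': 100.0}
```

Plotting these percentages per feature for two products yields a comparative chart like Figure 4.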

Applied Computational Intelligence and Soft Computing 7

Table 3: Accuracy of feature-based opinion classification using linguistic hedges.

                  Approach 1:                  Approach 2:                  Approach 3:
                  valence points adjustment    Vo and Ock's fuzzy           our proposed method
                                               adjustment functions
Dataset           Binary (%)  Fine-grained (%) Binary (%)  Fine-grained (%) Binary (%)  Fine-grained (%)
Tablets           80.87       71.62            87.28       79.73            90.16       87.45
E-book readers    76.21       65.89            77.37       72.95            85.44       82.72
Smartphones       88.32       74.36            89.16       83.93            92.56       89.95
Laptops           84.26       73.32            86.67       79.65            88.67       85.77
Average           82.42       71.30            85.12       79.07            89.21       86.47

[Figure 4 here: bar chart of positive reviews (%), 0–100, for Smartphone 1 and Smartphone 2 across the features: call quality, body/design, touchscreen/display, phone camera, user interface, and navigation.]

Figure 4: Comparative featurewise summary of two smartphones.

We evaluated the efficiency of our opinion mining system when used for binary as well as fine-grained sentiment classification. To evaluate the effectiveness of our approach for incorporating fuzzy linguistic hedges, we compared it with two other approaches: (1) the valence points adjustment approach [13, 14] and (2) Vo and Ock's fuzzy adjustment approach [16].

The valence points adjustment approach is a simple hedge adjustment method that was proposed by Polanyi and Zaenen [13]. This method of valence adjustment has also been used by Kennedy and Inkpen in their system for rating movie reviews [14]. According to the valence points adjustment approach, all positive sentiment terms are given an initial or base value of 2 [13]. If such a term is preceded by a concentrator (intensifier) in the same phrase, its value becomes 3, while if it is preceded by a dilator (diminisher), its value becomes 1. Similarly, all negative sentiment terms are given a base value of −2, which is adjusted to −1 or −3 if preceded by a diminisher or intensifier hedge, respectively [13, 14].
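The valence points scheme as described transcribes directly into a few lines of code. The hedge word lists below are illustrative assumptions; the cited systems [13, 14] use broader lexicons.

```python
# Illustrative hedge word lists; the cited systems use broader lexicons.
INTENSIFIERS = {"very", "extremely", "really"}
DIMINISHERS = {"quite", "somewhat", "barely"}

def valence_points(base, preceding_hedge=None):
    """Valence points adjustment of Polanyi and Zaenen [13].

    base is +2 for a positive sentiment term, -2 for a negative one;
    an intensifier moves the value away from zero, a diminisher toward zero.
    """
    if preceding_hedge in INTENSIFIERS:
        return base + 1 if base > 0 else base - 1
    if preceding_hedge in DIMINISHERS:
        return base - 1 if base > 0 else base + 1
    return base

print(valence_points(+2, "very"))   # 3  ("very good")
print(valence_points(-2, "quite"))  # -1 ("quite bad")
```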

Vo and Ock have proposed fuzzy adjustment functions [16] for incorporating the effect of hedges. They considered five categories of hedges (increase, decrease, invert, invert increase, and invert decrease) and have proposed fuzzy functions for each category [16].

The feature-based product review classification approach is augmented with each of the three hedge adjustment approaches in turn, and their classification accuracies are compared when applied to the task of binary as well as fine-grained sentiment classification of product reviews. The results of the comparison of the three approaches are tabulated in Table 3.

It can be observed from Table 3 that all three approaches give acceptable accuracies (over 82%) when used for binary classification of user reviews, where sentiment polarity is simply classified as "positive" or "negative." However, our proposed approach performs binary classification with higher average accuracy (89%) than the other two approaches.

When used for fine-grained classification, all three approaches deteriorate in accuracy. This is understandable because increasing the number of categories for sentiment classification tends to produce more errors near the boundaries of adjacent classes (e.g., between "very negative" and "negative"). Here again, our proposed approach proves to be more robust than the other two approaches. Empirical results tabulated in Table 3 indicate that the accuracy of approach 1 decreased by approximately 11% while the accuracy of approach 2 decreased by approximately 6% over the test dataset when used for fine-grained classification, wherein the number of output classes was increased. In contrast, our proposed approach shows only a 3% decline in accuracy. Our approach gives high accuracy of over 86% when used for fine-grained review sentiment classification and clearly outperforms the other two approaches. Thus, the proposed opinion mining system successfully incorporates the effect of linguistic hedges and performs sentiment classification of reviews with acceptable accuracy.

At present we have considered all online reviews to be of equal authenticity while performing the opinion mining. However, in the future we would like to build an enhanced opinion mining system that calculates the weight of an opinion by establishing its authenticity. On some blogs, a user's initial or base review is often rated by other readers by simply clicking on an Agree/Thumbs Up symbol to express agreement or a Disagree/Thumbs Down symbol to express disagreement. Sometimes the comments are further commented upon by other reviewers, thus forming chains of comments. Performing opinion mining on such chains can establish the authenticity of the initial review. For example, a sham review written by a competitor discrediting a rival's product would receive several "Disagree" comments from other readers. Spoof reviews can also jeopardize the recommendation system of an online shopping site. In the future, we would like to enhance our opinion mining system to take into consideration such secondary comments to refine the weight of the base opinion, which in turn can be used to generate a reliable "recommendation system" for online shoppers.

5. Conclusion

Empirical results indicate that the proposed opinion mining system performs both binary and fine-grained sentiment classification of user reviews with high accuracy. The proposed functions for emulating fuzzy linguistic hedges could be successfully incorporated into the sentiment classification task. Moreover, our approach significantly outperforms other contemporary approaches, especially when the granularity of the sentiment classification task is increased.

In the future, we would like to build an advanced opinion mining system capable of rating the authenticity of a user review based on mining opinion threads of secondary reviewers.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] M. Hu and B. Liu, "Mining and summarizing customer reviews," in Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '04), pp. 168–177, August 2004.

[2] M. A. Jahiruddin, M. N. Doja, and T. Ahmad, "Feature and opinion mining for customer review summarization," in Proceedings of the 3rd International Conference on Pattern Recognition and Machine Intelligence (PReMI '09), vol. 5909 of Lecture Notes in Computer Science, pp. 219–224, 2009.

[3] S. Shi and Y. Wang, "A product features mining method based on association rules and the degree of property co-occurrence," in Proceedings of the International Conference on Computer Science and Network Technology (ICCSNT '11), vol. 2, pp. 1190–1194, December 2011.

[4] S. Huang, X. Liu, X. Peng, and Z. Niu, "Fine-grained product features extraction and categorization in reviews opinion mining," in Proceedings of the 12th IEEE International Conference on Data Mining Workshops (ICDMW '12), pp. 680–686, 2012.

[5] C.-P. Wei, Y.-M. Chen, C.-S. Yang, and C. C. Yang, "Understanding what concerns consumers: a semantic approach to product feature extraction from consumer reviews," Information Systems and e-Business Management, vol. 8, no. 2, pp. 149–167, 2010.

[6] A.-M. Popescu and O. Etzioni, "Extracting product features and opinions from reviews," in Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP '05), pp. 339–346, October 2005.

[7] W. Y. Kim, J. S. Ryu, K. I. Kim, and U. M. Kim, "A method for opinion mining of product reviews using association rules," in Proceedings of the ACM International Conference on Interaction Sciences: Information Technology, Culture and Human (ICIS '09), pp. 270–274, November 2009.

[8] M. K. Dalal and M. A. Zaveri, "Semisupervised learning based opinion summarization and classification for online product reviews," Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 910706, 8 pages, 2013.

[9] B. Pang, L. Lee, and S. Vaithyanathan, "Thumbs up? Sentiment classification using machine learning techniques," in Proceedings of the International Conference on Empirical Methods in Natural Language Processing (EMNLP '02), pp. 79–86, 2002.

[10] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-II," Information Sciences, vol. 8, no. 4, part 3, pp. 301–357, 1975.

[11] V. N. Huynh, T. B. Ho, and Y. Nakamori, "A parametric representation of linguistic hedges in Zadeh's fuzzy logic," International Journal of Approximate Reasoning, vol. 30, no. 3, pp. 203–223, 2002.

[12] L. Dey and Sk. M. Haque, "Opinion mining from noisy text data," International Journal on Document Analysis and Recognition, vol. 12, no. 3, pp. 205–226, 2009.

[13] L. Polanyi and A. Zaenen, "Contextual valence shifters," in Computing Attitude and Affect in Text: Theory and Applications, vol. 20 of The Information Retrieval Series, pp. 1–10, 2006.

[14] A. Kennedy and D. Inkpen, "Sentiment classification of movie reviews using contextual valence shifters," Computational Intelligence, vol. 22, no. 2, pp. 110–125, 2006.

[15] M. Taboada, J. Brooke, M. Tofiloski, K. Voll, and M. Stede, "Lexicon-based methods for sentiment analysis," Computational Linguistics, vol. 37, no. 2, pp. 267–307, 2011.

[16] A.-D. Vo and C.-Y. Ock, "Sentiment classification: a combination of PMI, SentiWordNet and fuzzy function," in Proceedings of the 4th International Conference on Computational Collective Intelligence Technologies and Applications (ICCCI '12), vol. 7654, part 2 of Lecture Notes in Computer Science, pp. 373–382, 2012.

[17] S. Nadali, M. A. A. Murad, and R. A. Kadir, "Sentiment classification of customer reviews based on fuzzy logic," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1037–1044, Malaysia, June 2010.

[18] S. Kim, K. Han, H. Rim, and S. H. Myaeng, "Some effective techniques for naïve Bayes text classification," IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 11, pp. 1457–1466, 2006.

[19] Z.-Q. Wang, X. Sun, D.-X. Zhang, and X. Li, "An optimal SVM-based text classification algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 1378–1381, August 2006.

[20] Z. Wang, Y. He, and M. Jiang, "A comparison among three neural networks for text classification," in Proceedings of the 8th International Conference on Signal Processing (ICSP '06), pp. 1883–1886, November 2006.

[21] D. Isa, L. H. Lee, V. P. Kallimani, and R. Rajkumar, "Text document preprocessing with the Bayes formula for classification using the support vector machine," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 9, pp. 1264–1272, 2008.

[22] R. D. Goyal, "Knowledge based neural network for text classification," in Proceedings of the IEEE International Conference on Granular Computing (GrC '07), pp. 542–547, November 2007.

[23] H. P. Luhn, "The automatic creation of literature abstracts," IBM Journal of Research and Development, vol. 2, pp. 159–165, 1958.

[24] H. P. Edmundson, "New methods in automatic extracting," Journal of the ACM, vol. 16, no. 2, pp. 264–285, 1969.

[25] C.-Y. Lin and E. H. Hovy, "Manual and automatic evaluation of summaries," in Proceedings of the ACL-02 Workshop on Automatic Summarization, vol. 4, pp. 45–51, 2002.

[26] C.-Y. Lin and E. H. Hovy, "Identifying topics by position," in Proceedings of the 5th Conference on Applied Natural Language Processing, pp. 283–290, 1997.

[27] R. Barzilay and M. Elhadad, "Using lexical chains for text summarization," in Proceedings of the ACL Workshop on Intelligent Scalable Text Summarization, pp. 10–17, 1997.

[28] D. Marcu, "Improving summarization through rhetorical parsing tuning," in Proceedings of the 6th Workshop on Very Large Corpora, pp. 206–215, 1998.

[29] J. Carbonell and J. Goldstein, "The use of MMR, diversity-based reranking for reordering documents and producing summaries," in Proceedings of the 21st ACM International Conference on Research and Development in Information Retrieval, pp. 335–336, 1998.

[30] J. Lin, N. Madnani, and B. J. Dorr, "Putting the user in the loop: interactive maximal marginal relevance for query-focused summarization," in Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT '10), pp. 305–308, June 2010.

[31] P.-Y. Zhang and C.-H. Li, "Automatic text summarization based on sentences clustering and extraction," in Proceedings of the 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT '09), pp. 167–170, August 2009.

[32] M. Hu and B. Liu, "Mining opinion features in customer reviews," in Proceedings of the 19th National Conference on Artificial Intelligence (IAAI '04), pp. 755–760, San Jose, Calif, USA, 2004.

[33] L. Zhao and C. Li, "Ontology based opinion mining for movie reviews," in Proceedings of the 3rd International Conference on Knowledge Science, Engineering and Management, pp. 204–214, 2009.

[34] P. D. Turney, "Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 417–424, 2002.

[35] D. Sleator and D. Temperley, "Parsing English with a link grammar," in Proceedings of the 3rd International Workshop on Parsing Technologies, pp. 1–14, 1993.

[36] D. Klein and C. D. Manning, "Fast exact inference with a factored model for natural language parsing," in Advances in Neural Information Processing Systems (NIPS '02), pp. 3–10, MIT Press, Cambridge, Mass, USA, 2003.

[37] R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," in Proceedings of the 20th International Conference on Very Large Data Bases, pp. 487–499, 1994.

[38] K. W. Church and P. Hanks, "Word association norms, mutual information, and lexicography," Computational Linguistics, vol. 16, no. 1, pp. 22–29, 1990.

[39] W. Zhang, T. Yoshida, and X. Tang, "Text classification using multi-word features," in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '07), pp. 3519–3524, October 2007.

[40] M. K. Dalal and M. A. Zaveri, "Automatic text classification of sports blog data," in Proceedings of the Computing, Communications and Applications Conference (ComComAp '12), pp. 219–222, January 2012.

[41] W. Zhang, T. Yoshida, and X. Tang, "TFIDF, LSI and multi-word in information retrieval and text categorization," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '08), pp. 108–113, October 2008.

[42] V. Hatzivassiloglou and K. McKeown, "Predicting the semantic orientation of adjectives," in Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics (ACL '98), pp. 174–181, 1998.

[43] P. D. Turney and M. L. Littman, "Measuring praise and criticism: inference of semantic orientation from association," ACM Transactions on Information Systems, vol. 21, no. 4, pp. 315–346, 2003.

[44] G. A. Miller, "WordNet: a lexical database for English," Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.

[45] S. Baccianella, A. Esuli, and F. Sebastiani, "SentiWordNet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining," in Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC '10), pp. 2200–2204, 2010.

[46] T. Zamali, M. A. Lazim, and M. T. A. Osman, "Sensitivity analysis using fuzzy linguistic hedges," in Proceedings of the IEEE Symposium on Humanities, Science and Engineering Research, pp. 669–672, 2012.

[47] M. K. Dalal and M. A. Zaveri, "Automatic classification of unstructured blog text," Journal of Intelligent Learning Systems and Applications, vol. 5, no. 2, pp. 108–114, 2013.

[48] J. Han and M. Kamber, Data Mining: Concepts and Techniques, chapter 2, Elsevier, 2nd edition, 2006.
