Grammatical structures for word-level sentiment detection.

Transcript of Grammatical structures for word-level sentiment detection.

Page 1: Grammatical structures for word-level sentiment detection.

Grammatical structures for word-level sentiment detection

OUTLINE

Abstract
Introduction
Syntactic relatedness tries
Encoding SRTs as a factor graph
Data source
Experiments and discussion
Conclusions and future work

Abstract

Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections.

This is because of a lack of both (1) annotated data at the word level and (2) algorithms that can leverage syntactic information in a principled way.

Introduction

Effective sentiment analysis for texts depends on the ability to extract who (source) is saying what (target).

Lloyd Hession, chief security officer at BT Radianz in New York, said that virtualization also opens up a slew of potential network access control issues.

Sources: Lloyd Hession, BT Radianz, and New York.

Targets: include all the sources, but also “virtualization”, “network access control”, “network”, and so on.

Opinion:

Searching for all triples {source, target, opinion} in this sentence

We call opinion mining “fine-grained” when it retrieves many different {source, target, opinion} triples per document.

Lloyd Hession, chief security officer at BT Radianz in New York, said that virtualization also opens up a slew of potential network access control issues.

“Lloyd Hession” is the source of an opinion, “slew of network issues,” about a target, “virtualization”.

We use a sentence’s syntactic structure to build a probabilistic model that encodes whether a word is opinion bearing as a latent variable.

Syntactic relatedness tries

We use the Stanford Parser to produce a dependency graph, consider the resulting undirected graph structure over words, and then construct a trie rooted at each possible target word in the sentence.
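This construction can be sketched as a breadth-first traversal of the undirected dependency graph; the dependency triples, function name, and output format below are illustrative, not from the paper.

```python
from collections import deque

def build_srt(deps, root):
    """Build a syntactic relatedness trie (as a parent map) by BFS
    over the undirected version of a dependency graph.

    deps: list of (head, relation, dependent) triples
    root: the candidate target word used as the trie's root
    """
    # Undirected adjacency: each dependency edge keeps its relation label.
    adj = {}
    for head, rel, dep in deps:
        adj.setdefault(head, []).append((dep, rel))
        adj.setdefault(dep, []).append((head, rel))

    tree = {root: (None, None)}  # node -> (parent, relation label)
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor, rel in adj.get(node, []):
            if neighbor not in tree:  # first visit wins: a tree, not a graph
                tree[neighbor] = (node, rel)
                queue.append(neighbor)
    return tree

# Toy fragment of the running example sentence.
deps = [("opens", "nsubj", "virtualization"),
        ("opens", "dobj", "slew"),
        ("slew", "prep_of", "issues")]
srt = build_srt(deps, "virtualization")
```

Each word thus ends up with a unique shortest-path connection back to the candidate target at the root.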

Encoding Dependencies in an SRT

SRTs enable us to encode the connections between a possible target word (an object of interest) and a set of related objects. SRTs are data structures consisting of nodes and edges.

The object of interest is the opinion target, defined as the SRT root node.

Each SRT edge corresponds to a grammatical relationship between words and is labeled with that relationship.

We use a notation to signify that node a has the relationship (“role”) R with node b.

Using sentiment flow to label an SRT

Goal: to discriminate between parts of the structure that are relevant to target-opinion word relations and those that are not.

We use the term sentiment flow for relevant sentiment-bearing words in the SRT and inert for the remainder of the sentence.

MPQA sentence: “The dominant role of the European climate protection policy has benefits for our economy.”

Suppose that an annotator decides that “protection” and “benefits” directly express an opinion about the policy, but “dominant” is ambiguous (it has some negative connotations). Then “protection” and “benefits” are labeled flow; “dominant” is labeled inert.

Invariant

Invariant: no node descending from a node labeled inert can be labeled as part of a sentiment flow.

Switching a flow label to inert requires all descendants of that node to switch to inert.

Switching an inert label to flow requires all ancestors of that node to switch to flow.
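Both propagation rules can be sketched as follows; the helper and the tree representation are hypothetical, for illustration only (the paper's sampler presumably enforces the invariant during sampling).

```python
def relabel(tree, labels, node, new_label):
    """Flip one node's label while preserving the SRT invariant:
    no flow node may descend from an inert node.

    tree: node -> parent (root maps to None)
    labels: node -> "flow" | "inert" (mutated in place)
    """
    labels[node] = new_label
    if new_label == "inert":
        # Inert cuts off the subtree: every descendant becomes inert too.
        for n, parent in tree.items():
            anc = parent
            while anc is not None:
                if anc == node:
                    labels[n] = "inert"
                    break
                anc = tree[anc]
    else:
        # Flow must stay connected to the root: every ancestor becomes flow.
        anc = tree[node]
        while anc is not None:
            labels[anc] = "flow"
            anc = tree[anc]
    return labels

# Toy SRT from the MPQA example: policy <- protection <- dominant.
tree = {"policy": None, "protection": "policy", "dominant": "protection"}
labels = dict.fromkeys(tree, "inert")
relabel(tree, labels, "dominant", "flow")     # ancestors flip to flow
snapshot = dict(labels)
relabel(tree, labels, "protection", "inert")  # descendant flips back to inert
```

The two branches are exactly the two rules above: an inert flip cascades downward, a flow flip cascades upward.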

Encoding SRTs as a factor graph

In this section, we develop supervised machine learning tools to produce a labeled SRT from unlabeled, held-out data in a single, unified model.

Sampling labels

A factor graph is a representation of a joint probability distribution in the form of a graph with two types of vertices:
variable vertices: Z = {z1 ... zn}
factor vertices: F = {f1 ... fm}
Each factor fi is associated with a subset Yi of the variables: Y = {Y1, Y2, ..., Ym}, Yi ⊆ Z.

We can then write the relationship as follows: p(z1, ..., zn) ∝ f1(Y1) × f2(Y2) × ... × fm(Ym).
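Concretely, the unnormalized joint is just the product of the factor potentials over their variable subsets. A minimal sketch, with made-up factor functions:

```python
from math import prod  # Python 3.8+

def joint_score(factors, assignment):
    """Unnormalized joint probability of a factor graph:
    the product over factors f_i applied to their variable subsets Y_i.

    factors: list of (variable_names, function) pairs
    assignment: dict mapping each variable name to a value
    """
    return prod(f(*(assignment[v] for v in names)) for names, f in factors)

# Two binary variables, an agreement factor, and a unary prior (illustrative).
factors = [
    (("z1", "z2"), lambda a, b: 2.0 if a == b else 0.5),
    (("z1",), lambda a: 0.9 if a == "flow" else 0.1),
]
score = joint_score(factors, {"z1": "flow", "z2": "flow"})  # 2.0 * 0.9
```

Normalizing this product over all assignments would give the actual probability; samplers only ever need the ratio of such scores.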

Our goal is to discover the values for the variables that best explain a dataset.

More specifically, we seek a posterior distribution over latent variables that partition words in a sentence into flow and inert groups.

g represents a function over features of the given node itself, or “node features.”

f represents a function over a bigram of features taken from the parent node and the given node, or “parent-node” features.

h represents a function over a combination of features on the node and features of all its children, or “node-child” features.

In addition to the latent value associated with each word, we associate each node with features derived from the dependency parse: the word from the sentence itself, the part-of-speech (POS) tag assigned by the Stanford parser, and the label of the incoming dependency edge.

The scoring function combines the contributions of each node to score(label | node) as the product of the g, f, and h factors evaluated at that node.
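One way to read this: a node's score for a label multiplies the g, f, and h contributions. The weight lookup below is a toy stand-in for the learned feature weights; the function name, weight keys, and POS-only features are invented for illustration.

```python
def node_score(label, node_pos, parent_pos, child_pos_list, weights):
    """Combine the three factor types for one node's label:
    g over node features, f over the (parent, node) bigram,
    h over the node paired with each of its children.
    Unseen feature combinations default to a neutral weight of 1.0."""
    g = weights.get(("g", label, node_pos), 1.0)
    f = weights.get(("f", label, parent_pos, node_pos), 1.0)
    h = 1.0
    for child_pos in child_pos_list:
        h *= weights.get(("h", label, node_pos, child_pos), 1.0)
    return g * f * h

# An adjective under a noun, with weights favoring the "flow" label.
weights = {("g", "flow", "JJ"): 2.0, ("f", "flow", "NN", "JJ"): 3.0}
s = node_score("flow", "JJ", "NN", [], weights)  # 2.0 * 3.0 * 1.0
```

A sampler would compare such scores for flow versus inert at each node (subject to the invariant) when choosing labels.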

Data source

Sentiment corpora with sub-sentential annotations exist, such as the Multi-Perspective Question-Answering (MPQA) corpus and the J. D. Power and Associates (JDPA) blog post corpus, but most of these annotations are at the phrase level.

We developed our own annotations to discover such distinctions.

Information technology business press

We focus on a collection of articles from the IT professional magazine Information Week, from the years 1991 to 2008.

This consists of 33K articles including news bulletins and opinion columns.

Our IT concept target list (59 terms) comes from our application.

Crowdsourced annotation process

There are 75K sentences with IT concept mentions, but only a minority of them (about 219) express relevant opinions.

We engineered tasks so that only a randomly selected five or six words (excluding all function words) appear highlighted for classification (a “highlight group”), in order to limit annotator boredom. Three or more users annotated each highlight group.
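The selection step might look like the sketch below; the function-word list and the helper itself are hypothetical, for illustration only.

```python
import random

# A tiny illustrative stoplist; the real task would use a full function-word list.
FUNCTION_WORDS = {"the", "a", "an", "of", "at", "in", "that", "and",
                  "to", "up", "also", "said"}

def highlight_group(sentence, k=6, seed=None):
    """Pick up to k content words from a sentence to highlight for
    annotation; function words are never shown to annotators."""
    rng = random.Random(seed)
    content = [w for w in sentence.split()
               if w.lower().strip(",.") not in FUNCTION_WORDS]
    return rng.sample(content, min(k, len(content)))

group = highlight_group(
    "Lloyd Hession said that virtualization also opens up a slew of issues",
    k=5, seed=0)
```

Presenting annotators with small random subsets like this trades coverage per task for sustained attention across many tasks.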

Lloyd Hession, chief security officer at BT Radianz in New York, said that virtualization also opens up a slew of potential network access control issues.

Highlighted word classes were collapsed as follows:
“positive”, “negative” -> “opinion-relevant”
“not opinion-relevant”, “ambiguous” -> “not opinion-relevant”

Annotators labeled 700 highlight groups. Training groups: 465 SRTs. Testing groups: 196 SRTs.

Experiments and discussion

We run every experiment (training a model and testing on held-out data) 10 times and take the mean and range of all measures.

F-measure is calculated for each run and averaged post hoc.
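The reporting protocol amounts to something like the following sketch (the helper name and the F-scores are illustrative, not the paper's results):

```python
def summarize_runs(f_scores):
    """Average F-measure post hoc over repeated runs and report the
    mean and the range, mirroring the 10-run protocol described above."""
    mean = sum(f_scores) / len(f_scores)
    spread = (min(f_scores), max(f_scores))
    return mean, spread

# Per-run F-measures from repeated train/test cycles (made-up numbers).
mean_f, (low, high) = summarize_runs([0.61, 0.64, 0.59, 0.66])
```

Averaging per-run F-measures post hoc, rather than pooling counts across runs, keeps each run's precision/recall trade-off intact.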

Experiments

The baseline system is the initial setting of the labels for the sampler: uniform random assignment of flow labels, respecting the invariant.

This leads to a large class imbalance in favor of inert: a switch to inert converts all nodes downstream of that node to inert.

Discussion

We present a sampling of possible feature-factor combinations in table 1 in order to show trends in the performance of the system.

Though these invariant-violating models are unconstrained in how they label the graph, our invariant-respecting models still outperform them; the SRT invariant both yields better performance and should be more useful to downstream tasks.

Manual inspection

One pattern that stood out prominently in the testing data with the full-graph model was the misclassification of flow labels as inert in the vicinity of Stanford dependency labels such as conj_and. There are far fewer incidents of inert labels being classified as flow.

This problem could be resolved by making some features transparent to the learner.

For example, if node q has an incoming conj_and dependency edge label, then q’s parent could also be directly connected to q’s children.
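A sketch of such a transparency transform on dependency triples; the edge format, function name, and relation handling are assumptions for illustration.

```python
def make_transparent(edges, transparent_rels=("conj_and",)):
    """Add shortcut edges so the learner can 'see through' coordination:
    if node q is attached to its parent via a transparent relation such
    as conj_and, connect q's parent directly to each of q's children.

    edges: list of (parent, relation, child) triples; returns an
    augmented copy of the list.
    """
    incoming = {child: (parent, rel) for parent, rel, child in edges}
    extra = []
    for parent, rel, child in edges:
        # `parent` plays the role of q here; inspect q's own incoming edge.
        if parent in incoming and incoming[parent][1] in transparent_rels:
            grandparent = incoming[parent][0]
            extra.append((grandparent, rel, child))
    return edges + extra

# "blogs" coordinates with "Weblogs"; the modifier "personal" should also
# be visible from "blogs".
edges = [("blogs", "conj_and", "Weblogs"), ("Weblogs", "amod", "personal")]
augmented = make_transparent(edges)
```

The added edge lets features computed at the coordination head see modifiers attached to its conjuncts.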

Paths found

But Microsoft’s informal approach may not be enough as the number of blogs at the company grows, especially since the line between “personal” Weblogs and those done as part of the job can be hard to distinguish.

In this case, the Turkers decided that “distinguish” expressed a negative opinion about blogs, but it is actually the modifier “hard” that makes it negative.

In this path, “blog” and “distinguish” are both connected to one another by “hard”, giving “distinguish” its negative spin.

Conclusions and future work