
Research Paper
First Published: November 2011

Sentiment: Indication vs. Insight

In cooperation with Liam O’Neill, Master’s Graduate at London School of Economics and Political Science


Contents

Executive Summary
Introduction
    Why does online sentiment analysis matter?
    Reliance on Basic Metrics
    Problems with social media monitoring tools
    Aims of this paper
Methodology
    The Subject – BlackBerry PlayBook
    Quantitative Content Analysis
    Procedure
    Coding variables – Tonal Sentiment
    Coding variables – Topics
    Coding variables – Behavioural Sentiment
Findings and Analysis
    What ‘Positive, Negative, Neutral’ and Topic analysis models tell us
    Insight through deep sentiment analysis: Adding Value
    Behavioural Sentiment Variables
    1. Experience: Direct Experience vs. Indirect Experience
        Analysis
    2. Industry Context: Stand Alone vs. Comparative
        Analysis
    3. Reasoning: Rational vs. Emotional
        Analysis
    4. Composure: Passive vs. Aggressive
        Analysis
Conclusion
Bibliography

   


Executive Summary

This paper applies social science methodology to brand sentiment analysis in order to discover whether additional value can be added to the industry-standard metrics of ‘Positive, Negative, or Neutral’ in analysing sentiment towards brands. The aim of this paper is to deliver deeper insight from a dataset of online brand sentiment, as opposed to the indicative offerings of basic ‘Positive, Negative, or Neutral’ metrics.

Using Quantitative Content Analysis and an inductive analysis framework, new variables were used to analyse consumer behaviour and the drivers of brand sentiment expressed in a digital environment. The BlackBerry PlayBook tablet was used as the subject of the study due to its position as one of several emerging products in the tablet market, a status which meant that an appropriate volume of online commentary could be collected for analysis.

The basic, top-level metrics of ‘Positive, Negative, or Neutral’ were: 48% Positive, 38% Negative, and 14% Neutral. This output is typical of analysis from social media research agencies and tools. While the metric provides a ‘temperature check’ of what the audience is saying online, its strategic value is limited and it provides little direction on behaviour. It is an indication rather than an insight.

This paper demonstrates how valuable insight can be derived by applying a social science methodology to a dataset of user-generated comments relating to sentiment.

Implementing the analysis developed for this research allows for:

• Identifying certain drivers of sentiment. In this case, negative sentiment was based on preconceived views of the product rather than on direct, hands-on experience with it. In contrast, Positive sentiment towards the product was more likely to be informed by Direct Experience through purchase or a hands-on trial.


• Insight into the spaces where there are disingenuous and uninformed conversations. For instance, 35% of all negative comments in the landscape are attributed to ‘Non-Specific’ comments; furthermore, 4 out of every 5 of these are based on no experience with the product.

• Insight into the spaces where rational and informed conversations are taking place. For instance, although only 12% of the Share of Voice is driven by the ‘User Experience’ of the product, 2 in every 3 comments about it are positive, and 9 out of 10 are based on Direct Experience of the product.

• Insight into what people are talking about, how they are expressing sentiment, and how sentiment differs within topic groups. When compared side-by-side, the passionate commentary shows that the audience is predominantly negative for all topics except UX, where it is overwhelmingly positive.

Given the mass participation across the social web, we believe that if the correct methodology is applied via a blend of Qualitative and Quantitative metrics, interested parties are now able to understand target audiences better than ever before. The above findings are some examples of that insight.

 

 


Introduction

Why does online sentiment analysis matter?

An important part of market research analysis and strategy has always been to discover what a target audience is thinking. Sentiment analysis involves the “computational treatment of opinion, sentiment, and subjectivity in text” (Pang & Lee, 2008), and has seen a recent surge in interest due to increased access to consumer opinion since the advent and rise of social media. As Zabin & Jefferies point out: “consumers have at their disposal a soapbox of unprecedented reach and power by which to share their brand experiences and opinions, positive or negative, regarding any product or service” (2008). Naturally, companies are seeking to harness these user-generated opinions, as they can influence the opinions of other consumers and “ultimately their brand loyalties, their purchase decisions, and their own brand advocacy” (ibid).

Much analysis has been applied to the range of opinions expressed through web-based user-generated content in attempts to qualitatively and quantitatively assess audience thoughts and behaviours. However, the capacity to analyse online sentiment towards brands has in many cases failed to reach its potential, due to the use of basic metrics and an over-reliance on automated data collection tools with limited capacity for insight and inconsistent outputs.

A research paper from Human Digital in July 2010 found that, when tested side-by-side, automated social media monitoring tools offer vastly different results for volumes of data, platform categorization, and, crucially, brand sentiment analysis. This highlights the difficulty of accurately assessing sentiment with an algorithm.

Reliance on Basic Metrics

Understanding social media behaviour, and how to interpret the volumes of data through meaningful metrics, is an underdeveloped industry (Schweidel et al, 2011). As a result, many leading firms “rely on simplified measures such as the total volume of posted comments or the average sentiment expressed across all posted comments” (ibid, p.2).

A common output of sentiment analysis is the percentage attribution of the labels ‘Positive, Negative, or Neutral’ to a dataset. Dillon et al (2001) found that these measures of ‘brand ratings’ are problematic because they do not consider that brand sentiment can be informed by different factors. For instance: “Brand-specific associations refer to features, attributes, or benefits that consumers link to a brand and that differentiate it from the competition,” while “general brand impressions refer to general impressions about the brand that are based on a more holistic view of the brand” (p.1). Simply testing for ‘Positive, Negative, or Neutral’ sentiment does not account for any drivers of sentiment, nor does it consider any context surrounding and influencing sentiment. It therefore offers only a shallow insight into the actual state of brand sentiment.

Problems with social media monitoring tools

There has been a rise in search-based services that monitor online conversations and automatically analyse online sentiment via keywords (Pang & Lee, 2008). However, automated tools fail to deliver on their promise to filter and analyse social media data (Human Digital, 2010). When tested side-by-side, these automated services offer vastly differing results for brand sentiment, further highlighting the difficulty of accurately assessing sentiment with an algorithm.

The process of automating brand sentiment with search algorithms has several limitations (Pang & Lee, 2008). These include the fact that sarcasm and irony are very difficult to detect, difficulties in correctly identifying negative sentiment due to the complexities of negation terms in sentence structure (see Pang & Lee, 2008, p.36 for a detailed analysis of this fault), and difficulty in accurately identifying the relationship between topic and sentiment (ibid). Also of note is that two different online reviewers may express the same sentiment for entirely different reasons (as also identified by Dillon et al, 2001).
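To make these limitations concrete, the sketch below shows a deliberately naive, lexicon-based scorer of the kind these critiques target. The word lists and scoring rule are hypothetical illustrations, not a reconstruction of any particular monitoring tool:

    # A deliberately naive keyword-based sentiment scorer, for illustration only.
    # The lexicons are hypothetical; commercial tools are more sophisticated, but
    # the underlying failure modes (negation, sarcasm) are the same.
    POSITIVE_WORDS = {"great", "love", "good", "easy", "amazing"}
    NEGATIVE_WORDS = {"fail", "junk", "weak", "hate", "rubbish"}

    def naive_label(comment: str) -> str:
        """Label a comment by counting lexicon hits, ignoring sentence structure."""
        words = comment.lower().replace(".", " ").replace(",", " ").replace("!", " ").split()
        score = sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)
        return "Positive" if score > 0 else ("Negative" if score < 0 else "Neutral")

    # Negation: the keyword count never sees the "not", so the label is wrong.
    print(naive_label("The PlayBook is not good and not easy to use."))           # -> Positive

    # Sarcasm: superficially positive vocabulary, clearly negative intent.
    print(naive_label("Oh great, another tablet. Just what the world needed."))   # -> Positive

Human coding avoids these particular failure modes, which is part of the rationale for the manual approach adopted in the Methodology section below.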


Taking into account these limitations of automated services, there remains a need to collect data in a more robust fashion and improve on the metrics and analysis methods that currently exist in the industry.

Aims of this paper

This paper will apply a social science methodology to brand sentiment in an attempt to offer valuable insight into online brand sentiment. By offering insight over indication, this paper seeks to provide the kind of results that can potentially impact the top-level strategy of an organisation.

The theoretical research question that this paper seeks to address is: ‘Given the rise of online participation, what can the application of a social science methodology to the issue of sentiment provide?’

This study seeks to address this question by applying a Quantitative Content Analysis methodology to a dataset of user-generated comments on a particular product (the BlackBerry PlayBook). The sub-questions that this research asks are: ‘What are the factors that influence brand sentiment?’ and ‘How can new variables and analysis add value to sentiment analysis in a way that can inform communications strategy?’

Methodology

The Subject – BlackBerry PlayBook

The Research In Motion (RIM) BlackBerry PlayBook is a 7” tablet launched in June 2011. It is RIM’s first attempt to break into the growing tablet market dominated by Apple’s iPad. This product was deemed appropriate for this study as it is a Fast Moving Consumer Good that was likely to generate substantial online discussion from which a dataset could be drawn.

Quantitative Content Analysis

A Quantitative Content Analysis (QCA) methodology was adopted for this research. QCA enables research on a large sample of texts, making these texts speak as a group to answer research questions. It allows for “the objective, systematic, and quantitative description of the manifest content of communication” (Berelson 1952:18). This makes it an appropriate method for researching online brand and product sentiment because it can provide a snapshot of opinion within a body of online textual discussions. It allows researchers to make inferences from texts by “objectively and systematically identifying specified characteristics of messages” (Holsti 1969:14).

Procedure

QCA provides “replicable and valid inferences by systematically identifying specified characteristics of the message” (Krippendorff 1980:21). It requires the collection of a sample of texts, which can then be coded against characteristics and quantitatively analysed. To that end, a sample of 500 user conversations on the BlackBerry PlayBook was drawn from the following online channels: Twitter, Engadget, BDG, The Guardian online, The Telegraph online, ZDNet, PC Mag, PC Pro, PC World, and Gizmodo. The sample was drawn from various platforms due to Schweidel et al’s (2011) observation that different opinions exist across different platforms.

Coding variables – Tonal Sentiment

The industry-standard Positive, Neutral and Negative sentiments were coded to provide an introductory insight into audience perceptions.

Coding variables – Topics

In order to gain insight into the contexts that surround sentiment, the data was coded by Topic: the aspect of the product that the sentiment refers to. Topic groups were inductively derived through an initial read-through of the dataset before any coding was undertaken:

• Non-Specific Commentary: general opinions that give no reference to any particular aspect or feature of the product.

• Characteristics: what the product offers (or does not offer) in terms of features and functionality: applications, email, internet tethering, calendars, etc.

• Hardware: the physical infrastructure: screen size and resolution, build materials, button placement, etc.

• User Experience: comments relating to the product’s utility, ease of use, speed in use, or efficiency.

• Price: the retail price of the product.

The reason for selecting an inductive method was so that the dataset could ‘speak for itself’. Understanding that each brand is different, the idea was to create variables that were specific to this case, rather than to be locked into existing or standard metrics.

Coding variables – Behavioural Sentiment

New coding variables were inductively derived from initial analysis of the dataset, in a similar fashion to the early stages of thematic analysis methods (as outlined in Bryman, 2001). The dataset was analysed with particular emphasis on patterns, repetitions and groupings within the texts that could lead to the emergence of theories surrounding the behaviours of the audience and sentiment. This process identifies trends within the data, which can then be analysed quantitatively. The aim was to identify characteristics of comments that help to tell us more about the attitude and behaviour of the person making the comment. In this way, it was hoped that the findings would offer insight into what is informing, influencing and driving sentiment.

This process resulted in the following additional coding variables, each of which attempts to analyse what drives sentiment:

1. Experience (Direct Experience vs. Indirect Experience): A hypothesis was drawn that those with ‘direct experience’ (having either physically trialled or purchased the product) would have different opinions from those who had an ‘indirect experience’, such as having seen an advertisement, read a review, or watched a video review. While these variables were derived inductively from the data, this approach is also similar to Schweidel et al’s (2011) variables of ‘customer’ and ‘non-customer’, although this study also includes those who have trialled the product but not yet purchased it.

2. Industry Context (Stand Alone vs. Comparative): Another insight from the inductive exercise concerned the inclusion of the product’s competitors in the conversation. Some commenters discussed the PlayBook only, giving opinion on the product as a stand-alone subject. Other commenters compared the PlayBook to its competitors (such as the Apple iPad or Motorola Xoom). The comments were coded as either ‘Stand Alone’ or ‘Comparative’ in order to test what these characteristics can show about sentiment. This also aligns with the observations of Dillon et al (2001), who note that brand sentiment can be influenced by comparisons of products and attributes with competitors.

3. Reasoning (Rational vs. Emotional): When inductively surveying the dataset, it emerged that some sentiment had an element of rational depth behind it, such as weighing up the pros and cons of the product or considering how the product would fit in with the user’s lifestyle. Other comments were more emotional, knee-jerk reactions that offered opinion with little or no reasoning or rationality.

4. Composure (Passive vs. Aggressive): It should be of little surprise that Internet discussions get somewhat heated, and this was no different for the sample in this case. It was felt that the tone of the comments had the potential to add value to the industry-standard method, and therefore the sample was coded by these variables in order to give insight into the areas of discussion that generate the most ‘passionate’ sentiment.
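As a minimal sketch of how a sample coded against these variables might be represented before the quantitative step (the record structure is illustrative, not the study’s actual coding sheet, and the codes for context, reasoning and composure in the example are guesses added for demonstration):

    from dataclasses import dataclass

    # One record per coded comment; field names mirror the coding frame above.
    @dataclass(frozen=True)
    class CodedComment:
        text: str
        sentiment: str   # Positive / Negative / Neutral
        topic: str       # Non-Specific / Characteristics / Hardware / User Experience / Price
        experience: str  # Direct / Indirect
        context: str     # Stand Alone / Comparative
        reasoning: str   # Rational / Emotional
        composure: str   # Passive / Aggressive

    # A human coder reads each comment and records one value per variable.
    example = CodedComment(
        text="I have one. It's GREAT! Love to swipe. Hate pushing buttons.",
        sentiment="Positive",
        topic="User Experience",
        experience="Direct",
        context="Stand Alone",   # illustrative guess
        reasoning="Emotional",   # illustrative guess
        composure="Passive",     # illustrative guess
    )
    print(example)

Once every comment in the sample has been coded in this way, the records can be tallied and cross-referenced, which is what the Findings and Analysis section that follows sets out to do.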


Findings and Analysis

What ‘Positive, Negative, Neutral’ and Topic analysis models tell us

The system of coding sentiment as ‘Positive, Negative or Neutral’ provides the headline figures shown below:

[Figure: Overall sentiment – Positive 48%, Neutral 14%, Negative 38%]

This offers a general snapshot of the online sentiment towards the BlackBerry PlayBook: there is a lot of ‘opinionated’ discussion, with most of the audience having decided whether or not they are a fan of the product. However, it provides little insight into behaviour, and therefore has little value in terms of strategy formation.

Insight through deep sentiment analysis: Adding Value

[Figure: Topic breakdown – Share of Voice]


This output indicates that the dominant topic within conversations is ‘Non-Specific Commentary’, e.g. the audience expressing unfocused sentiment such as ‘I love this!’ or ‘this looks rubbish’. Therefore, it can be noted that 43% of the conversations about the BlackBerry PlayBook have little value in terms of insight. ‘Characteristics’ is the next leading topic (27%), suggesting that when people do talk about the product in detail, it is the email, WiFi, apps, etc., that are provoking the interest. The user experience (UX) of the product commands just 12% of the conversation, which initially suggests that it is not the most important topic to communicate about. This, however, is misleading and misses a significant opportunity, which will be explored later in this paper.

Insights, strategic recommendations and direction are still difficult to extract from this (admittedly more granular) selection of metrics. All we have determined is that most people commenting are either positive or negative, and that when they are not creating noise (Non-Specific Commentary) they are talking about a range of characteristics.

One of the empirical research questions this paper asks is: ‘How can new variables and analysis add value to sentiment analysis and inform communications strategy?’ In other words, ‘by adding additional behaviour metrics to the analysis of topic and sentiment, can we produce data to better inform strategy?’ The following sections explore this in further detail.

[Figure: Positive, Negative, Neutral sentiment against topics]


 

An improvement on the previous metrics, the data here indicates that brand sentiment performs slightly better within some areas of conversation, but with no real indication as to why this is the case. While these differences in sentiment and brand health within topics can be observed, this output still has limited capacity to inform strategy, as it presents only a basic temperature gauge of online sentiment within these topic groups.

As the figure above shows, viewing the three topic areas independently of their Share of Voice reveals how sentiment differs depending on which aspect of the product is being discussed.

From here, the new behavioural sentiment variables can be analysed against specific segments of the data that combine topic and sentiment. To demonstrate this, the behavioural sentiment variables will be applied to three segments of data: Non-Specific/Negative, User Experience/Positive, and Characteristics/Positive.
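As a small sketch of what this segmentation looks like in practice (the coded records and resulting percentages below are invented for illustration and are not drawn from the study’s dataset):

    from collections import Counter

    # Illustrative coded records as (topic, sentiment, experience) tuples.
    coded = [
        ("Non-Specific", "Negative", "Indirect"),
        ("Non-Specific", "Negative", "Indirect"),
        ("Non-Specific", "Negative", "Direct"),
        ("User Experience", "Positive", "Direct"),
        ("User Experience", "Positive", "Direct"),
        ("Characteristics", "Positive", "Direct"),
        ("Characteristics", "Positive", "Indirect"),
    ]

    def experience_split(topic: str, sentiment: str) -> dict:
        """Within one topic/sentiment segment, the Direct vs. Indirect share (%)."""
        segment = [exp for t, s, exp in coded if t == topic and s == sentiment]
        counts = Counter(segment)
        return {exp: round(100 * n / len(segment)) for exp, n in counts.items()}

    for topic, sentiment in [("Non-Specific", "Negative"),
                             ("User Experience", "Positive"),
                             ("Characteristics", "Positive")]:
        print(topic, sentiment, experience_split(topic, sentiment))

The same slicing applies to each of the behavioural variables reported in the sub-sections that follow.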


   

 

Behavioural Sentiment Variables

1. Experience: Direct Experience vs. Indirect Experience

 

 

The data shows that there are large differences between the underlying behaviours that drive Positive and Negative sentiment within different topics of discussion. The Negative sentiment associated with Non-Specific commentary predominantly comes from people who have never used the product. In contrast, the vast majority of those commenting on the User Experience of the PlayBook are influenced by having trialled or purchased the product. This new behavioural sentiment variable shows that there are vastly different conditions within positive and negative sentiment. This clearly offers more insight into sentiment than the standard metrics of ‘Positive, Negative, Neutral’.

 

Examples:

“It’s a shame this device is so obviously weak.”

“Another FAIL. Much like all of the other Blackberry junk.”

Non-Specific, Negative, Indirect Experience

“The gesture is the best thing about the playbook. Very easy to use.”

“I have one. It’s GREAT! Love to swipe. Hate pushing buttons.”

User Experience, Positive, Direct Experience

 

 

Analysis

This value-added methodology allows for analysis of where the fickle and disingenuous conversations surrounding a product or brand are taking place. This could be within positive or negative sentiment, within topics, or within channels. Viewing topic analysis in a silo would point to ‘Non-Specific Commentary’ as a key topic area (due to its dominance in Share of Voice), but with the further insight provided by the new variables, this topic group is shown to house knee-jerk, emotional reactions to the product based on attitudes towards the brand.

In addition, this methodology allows for identification of the rational and informed conversations that are influenced by Direct Experience with the product. Topic and keyword analysis done by automated services and based solely on ‘Positive, Negative, Neutral’ can be misleading due to the lack of human analysis and insight. Identifying rational commentators and conversation areas can help achieve a more targeted strategy. In this case, the high level of Indirect Experience within the Negative sentiment indicates a high level of disingenuous ‘noise’ that is clouding out the rational and genuine discussion.

This improved insight can filter out the noise and inform who you should engage with and why, thus influencing and aligning with the organisation’s communications strategy. Furthermore, when trended over time as an overlay to Schweidel et al’s study (2011), this methodology can begin to map whether certain types of sentiment, driven by specific variables (for example, direct or indirect experience with a product), are typically expressed over certain channels. Imagine the marketing department at RIM knowing that emotionally displayed negative sentiment driven by a non-direct brand experience was statistically more likely to occur on Twitter, whereas rational, informed negative sentiment driven by a direct experience with the product was statistically more likely to occur within forums.

Understanding whether experience of the product is affecting what the audience is creating and consuming can have substantial ramifications for brands, ramifications that transcend marcomms and start to impact on product release cycles and product development. If a product is being released into the marketplace without an effective groundswell of informed opinion, it is more likely that the audience will draw their own conclusions about the product based on pre-existing prejudices. Having an appropriate appreciation of the brand health of your own organisation and of the competitors will help to determine whether an audience, left to their own opinions, will be positive or negative, which directly affects the release strategy for the product.

 

 

 

 

   


2. Industry Context: Stand Alone vs. Comparative

Within these sentiment/topic groups, we can see that there are differing results. The Non-Specific/Negative comments are more likely to contain a reference to a competitor product, meaning that casual, offhand commentary tends to compare the PlayBook to its competitors in a negative light. This is less so within both the User Experience/Positive group and the Characteristics/Positive group.

 

Examples:

“No thanks, I’ll stick with my iPad.”

“This can’t be for real. The Playbook is such a colossal fail. There are other tablets besides the Playbook and the iPad 2.”

Non-Specific, Negative, Comparative

“Amazing device, in fact I am using it right now to comment :) RIM has definitely outdone themselves with this tablet. Everything on it is easy and fun to use.”

“Think you’re good at multi-tasking? Wait till you experience it on the BlackBerry Playbook!”

User Experience, Positive, Stand Alone


Interestingly, when topic groups are taken out of consideration, there is actually little variation in sentiment:

Overall sentiment when discussing the PlayBook on its own:
[Figure: Positive 49%, Neutral 15%, Negative 36%]

Overall sentiment when discussing the PlayBook in the context of competitors:
[Figure: Positive 47%, Neutral 11%, Negative 42%]

This shows the value of analysing the behavioural sentiment variables within the context of the different topic groups. In many cases, the data becomes more useful and insightful when cross-referenced across several variables.

Analysis

Traditionally, measurement of market position online has been comparative (share of voice against competitors, sentiment towards competitor brands) without an appropriate understanding of whether this is actually harming your brand or not. This aspect of the methodology allows the brand to see whether the presence of competitors in the conversation is having a positive or negative effect on brand health. This reveals more ‘hidden’ data within the landscape, and gives marcomms professionals a new lens on competitive advantage. For instance, if positive sentiment increases despite the presence of a higher-profile competitor, it would be recommended that marcomms introduce competitive comparison into activity to increase brand health perception.

 

3. Reasoning: Rational vs. Emotional

Again, the graph above shows that there are significantly different conditions within the various sentiment groups. The Negative commentary that is Non-Specific in nature is also emotionally expressed and influenced, rather than rational, informed or considered. This negativity was often influenced by preconceived attitudes towards the brand or product, which were expressed without consideration of any potential redeeming factors that this product can offer. In contrast, we can observe that the positive sentiment within both the User Experience and Characteristics topic groups is expressed rationally. Here, the commentators’ expressions were based more on how the product could positively enhance their experiences than on how the product made them feel.

 

 


 

 

Examples:

“Guy next to me on subway has a PlayBook. Considered tossing him some spare change so he could buy a real tablet, warm meal.”

Non-Specific, Negative, Emotional

“Quality screen without backlight bleed, decent stereo speakers, decent amount of RAM, USB and HDMI as well as an OS that is more than an app drawer with basic task switching which is primarily designed to be a storefront.”

Characteristics, Positive, Rational

“Wow, I have to say it but the playbook handles orientation changes on responsive sites better than the iPad. No jerkiness at all.”

User Experience, Positive, Rational

 

 

Analysis

As mentioned earlier, the Non-Specific topic group has the highest Share of Voice within the dataset. The findings within this topic group demonstrate that the commentators who are producing what might be seen as ‘irrelevant’, ‘throw-away’ criticisms of BlackBerry, based on a negative bias towards the brand, are having a disproportionate effect on the brand health metrics. This is an example of the kind of insight that this methodology can deliver, as opposed to the metrics of ‘Positive, Negative, Neutral’.

People commenting negatively are likely to express opinions based on a preconceived admiration, aversion or expectation towards the product. In other words, people are reacting to the product on the basis that the product is made by BlackBerry, rather than reacting upon a consideration of the attributes of the product itself; this is particularly so for Negative comments.

Brands such as Apple and Nike have found success in encouraging an emotional connection to the brand. By contrast, companies such as Dell (at the height of the ‘Dell Hell’ crisis that engulfed the company in 2005 and affected its share price) and Motrin (who were on the receiving end of a consumer backlash to an advertising campaign in 2008) have been adversely affected by an emotional response to their brands.

Understanding whether emotion or reason is driving brand health positively can have a substantial impact on marcomms activities. This is best achieved through human analysis of content, for the same reasons outlined for sentiment analysis earlier in this paper: the unreliability of keyword identification in determining human behaviour.

 

4. Composure: Passive vs. Aggressive

While none of the above categories are predominantly aggressive, we can see that there are differing levels of aggression within the topic groups. The Non-Specific/Negative comments were around five times more likely to be aggressive than Positive comments on the product’s Characteristics. On the whole, negative commentary was twice as likely to be aggressive as positive commentary, regardless of the topic.

 

Examples:

“You honestly think this is better than the Xoom? I doubt it. The Xoom is wonderful. RIM is shit.”

“I just can’t believe RIM went into the tablet market with this lackluster tablet!”

Non-Specific, Negative, Aggressive

“I have had my Playbook for over three weeks now and already I do not want to live without it.”

User Experience, Positive, Passive

 

 

Analysis

It is important to understand which areas of the business and product are seen as passive and which as passionate, to ensure that the tone of marcomms activity speaks to the audience’s needs and interests, and that the brand is not seen as out of touch. Isolating passionate discussions shows differing results within topic groups, and analysis of these results emphasises the significance of User Experience within the data. When compared side-by-side, the passionate commentary shows that the audience is predominantly negative for all topics except UX, where it is overwhelmingly positive.

This is going to be different for every client, depending on the competitive landscape, product release cycle, expectations, reputation, etc. It is not as simple as saying that passion breeds consumer loyalty, because passion can be negative as easily as positive, and if it is targeted in the wrong way, it can have a negative effect on brand health.

In this case, it is important to counter the audience’s passion towards the product: the data suggests that calm, rational conversation about the product’s strengths is being drowned out under the weight of irrational, passionate sentiment about perceived weaknesses. However, it is important to promote the passionate sentiment towards the UX, as this has a positive effect on brand health.

 

Conclusion

User-generated sentiment around a brand or product is increasingly becoming a metric that organisations are keen to track and understand. Negative and positive sentiment is a key brand health indicator, so being able to track trends in this area is an objective of many organisations.

This new methodology of measuring online sentiment offers insight by identifying, isolating and analysing the behaviour behind sentiment expression. This results in a wider and more detailed insight into sentiment, which can act as a guide and indicator as to how organisations might wish to address certain issues across this spectrum. This value-added system of sentiment analysis can provide an insight into strategy formation, whereas automated measures of sentiment act only as a temperature check and do not have the depth required to inform tactics or strategy.

The ability to understand whether the audience is creating or consuming content that is prompted by an informed response to the product is important to note, especially if it is having a positive or negative effect on brand health. By analysing Direct Experience and Indirect Experience, this method allows for the identification of where the informed conversations are taking place, and of how brand/product sentiment is performing in these areas.


Some conversations surrounding the product compare it to its competitors, and some do not. Identifying and isolating these conversations can offer insight as to how this relates to sentiment. Strategically, this information can inform how marcomms campaigns should consider competitor products.

Reasoning and composure variables can offer insight into the range of sentiment tone, and how this differs among conversation topics. This information can offer valuable strategic insight in terms of responding to, or fitting in with, these tones.

In summary, this paper has added strategic value to the limited, yet often employed, brand sentiment metrics of ‘Positive, Negative, or Neutral’. By coding data within a Quantitative Content Analysis methodology and using human analytical methods, datasets pertaining to products and brands can provide valuable insight into online brand health and inform communications strategy.

 


Bibliography

Bryman, A. (2001), Social Research Methods.

Dillon, William R., Thomas J. Madden, Amna Kirmani and Soumen Mukherjee (2001), “Understanding What’s in a Brand Rating: A Model for Assessing Brand and Attribute Effects,” Journal of Marketing Research, 38 (4), 415-429.

Pang, B. & Lee, L. (2008), “Opinion Mining and Sentiment Analysis,” Foundations and Trends in Information Retrieval, Volume 2, Issue 1-2, January 2008.

Schweidel, David A., Moe, Wendy W. and Boudreaux, Chris (2011), “Listening in on Online Conversations: Measuring Brand Sentiment with Social Media,” September 2011. Available at SSRN: http://ssrn.com/abstract=1874892

Zabin, J. and Jefferies, A. (2008), “Social media monitoring and analysis: Generating consumer insights from online conversation,” Aberdeen Group Benchmark Report, January 2008.