THE INVESTIGATION OF ONLINE CONSUMER REVIEWS AS INTENTIONAL
SOCIAL ACTIONS.
A DISSERTATION
SUBMITTED TO THE GRADUATE SCHOOL OF BUSINESS
AND THE COMMITTEE OF GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Sophia Zak
June 2010
This dissertation is online at: http://purl.stanford.edu/pt016qw3530
© 2010 by Sophia Vladimir Zak. All Rights Reserved.
Re-distributed by Stanford University under license with the author.
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Dale Miller, Primary Adviser
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Francis Flynn
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Elizabeth Mullen
Approved for the Stanford University Committee on Graduate Studies.
Patricia J. Gumport, Vice Provost Graduate Education
This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.
ABSTRACT
Online reviews play an increasingly important role in consumers’ purchase decisions. Such
internet-enabled Word-of-Mouth communication offers society a tremendous potential to reduce
information asymmetries and in this way, increase the efficiency of electronic and traditional markets
(Dellarocas, 2005). Voluntary reporting, however, introduces the potential for reporting biases. In this
dissertation, I argue that consumers’ willingness to post a review on an online forum is conditioned on
the social normative landscape of this public space. I conducted four exploratory investigations of the
social dynamics involved in review posting behavior. Given the breadth of the research agenda,
these studies focus on different aspects of the research question and, as a result, vary in their levels of
analysis and methods. Across these studies, I found that review posting behavior was motivated by social
obligations and regulated by social norms. Furthermore, these social motives produced systematic
biases in review valence distributions. The implications of these findings are clear: Biased distribution
of online product reviews can lead to inefficiencies in consumer choice and erroneous conclusions
about consumer product preferences.
ACKNOWLEDGMENTS
I would like to dedicate this dissertation to my husband, Ilya, my father Vladimir, and my
mother Marina. Their multifaceted support and patience made it possible for me to pursue my passion.
I am also extremely grateful to Dale Miller for the invaluable guidance and support he has provided
throughout my time at Stanford. He should be considered a collaborator on this work as he
significantly contributed to the ideas put forth in this dissertation. Elizabeth Mullen, Francis Flynn,
Larissa Tiedens and Baba Shiv have also been extremely helpful to me over the past five years, and for
that I am very grateful. Additionally, I am indebted to Christal Calderon and Yoel Crane of Epinions
for granting me access to the Epinions review archives and permitting me to survey Epinions members.
Lastly, special thanks to Ravi Pillai, Valery Kuklin and the staff of the Stanford GSB OB Lab for their
tireless efforts in helping me with the data coding, data management and statistical analysis reported in
this dissertation.
TABLE OF CONTENTS
LIST OF TABLES ..................................................................................................................................IX
LIST OF FIGURES..................................................................................................................................X
CHAPTER 1..............................................................................................................................................1
Introduction....................................................................................................................................1
Defining the Consumer Review Phenomenon ...............................................................................3
Consumer Review Forums as Virtual Communities ......................................................................3
Antecedents of Online Reviews .....................................................................................................5
Social Identity as the Psychological Mechanism ...........................................................................7
Studies Overview ...........................................................................................................................8
CHAPTER 2............................................................................................................................................10
Introduction..................................................................................................................................10
Predictions....................................................................................................................................10
Methods .......................................................................................................................................12
Description of research context: Epinions.com................................................12
Data Set ............................................................................................................................15
Results and Discussion.................................................................................................................17
Descriptive Statistics ........................................................................................................17
Hypothesis testing.............................................................................................................18
Exploratory Analysis ........................................................................................................23
Summary ......................................................................................................................................26
CHAPTER 3............................................................................................................................................28
Introduction..................................................................................................................................28
Study 1 Predictions ......................................................................................................................28
Study 1 Methods ..........................................................................................................................30
Questionnaire development ..............................................................................................30
Recruitment ......................................................................................................................30
Survey measures ...............................................................................................................32
Respondents......................................................................................................................33
Study 1 Results and Discussions..................................................................................................33
Hypotheses testing............................................................................................................33
Study 2 Purpose ...........................................................................................................................37
Study 2 Methods ..........................................................................................................................37
Level of analysis...............................................................................................................37
Behavioral measures.........................................................................................................38
Exploratory analysis .........................................................................................................38
Discussion ....................................................................................................................................41
CHAPTER 4............................................................................................................................................44
Introduction..................................................................................................................................44
Study 3 Predictions ......................................................................................................................45
Study 3 Methods ..........................................................................................................................46
Participants .......................................................................................................................46
Procedures ........................................................................................................................47
Measures...........................................................................................................................49
Study 3 Results and Discussion ...................................................................................................50
Manipulation Checks (Full Sample, N=121)....................................................................50
Hypotheses testing (Full sample, N=121).........................................................................53
Data selection ...................................................................................................................54
Manipulation Checks (Subsample, N=59)........................................................................54
Hypothesis Testing (Subsample, N=59) ...........................................................................56
Study 4 .........................................................................................................................................57
Predictions....................................................................................................................................57
Study 4 Results and Discussion ...................................................................................................59
Summary ......................................................................................................................................63
CHAPTER 5............................................................................................................................................65
Introduction..................................................................................................................................65
Study 5 Predictions ......................................................................................................................65
Study 5 Methods ..........................................................................................................................67
Setting...............................................................................................................................67
Procedures (Phase I) .........................................................................................................67
Measures (Phase I)............................................................................................................67
Procedures (Phase II)........................................................................................................68
Measures (Phase II) ..........................................................................................................68
Debriefing.........................................................................................................................68
Study 5 Results and Discussion ...................................................................................................68
Sample attrition and response rates ..................................................................................68
Descriptive Statistics ........................................................................................................70
Hypothesis testing.............................................................................................................74
Study 6 Purpose ...........................................................................................................................77
Study 6 Methods ..........................................................................................................................77
Review sample..................................................................................................................77
Measures...........................................................................................................................77
Study 6 Results and Discussion ...................................................................................................81
Summary ......................................................................................................................................85
CHAPTER 6............................................................................................................................................88
Theoretical Contributions ............................................................................................................88
Practical Implications...................................................................................................................90
Forum administrators........................................................................................................90
Consumers ........................................................................................................................92
Marketers..........................................................................................................................93
Concluding Statement ..................................................................................................................93
REFERENCES........................................................................................................................................95
LIST OF TABLES
Table 1. Epinions archival data correlation table ....................................................................................18
Table 2. Epinions archival data descriptives split by review type ...........................................................18
Table 3. Model summary predicting Epinions review helpfulness..........................................................20
Table 4. Model summary predicting Epinions review valence................................................................22
Table 5. Epinions survey sample accumulation over data collection period ...........................................31
Table 6. Comparison of participating vs. nonparticipating Epinions author sample ...............................31
Table 7. Comparison of sample to Epinions user population ..................................................................32
Table 8. Correlations among behavioral and survey data........................................................................39
Table 9. Frequency of posting contrarian reviews (Full Sample, N=121)...............................................53
Table 10. Frequency of posting contrarian reviews (Subsample, N=59).................................................56
Table 11. Vignette study correlation table...............................................................................................59
Table 12. Patient sample attrition rate between Phase I and II of data collection ...................................69
Table 13. Descriptives and correlations of dental patients study.............................................................70
Table 13. Comparison of dental patients who posted versus those who opted out of posting.................73
Table 14. Model summary predicting dental patients’ review posting rate.............................................73
Table 15. Descriptives: Content themes of dental office reviews............................................................79
Table 16. Descriptives: Dental office review dimensions .......................................................................79
Table 17. Correlations between content dimensions of dental office reviews.........................................81
Table 18. Model summary predicting length of dental reviews ..............................................................84
LIST OF FIGURES
Figure 1a. Epinions author PRE-data truncation experience ...................................................................16
Figure 2. Valence distribution of Epinions reviews ................................................................................17
Figure 3a. Valence distribution of Epinions reviews written by inexperienced authors..........................25
Figure 3b. Valence distribution of Epinions reviews written by experienced authors ............................25
Figure 4. Epinions authors’ self-recollection of posting rates ...................................................................34
Figure 5a. Epinions authors’ self-recollection of posting rates: low self-monitors ...................................36
Figure 5b. Epinions authors’ self-recollection of posting rates: high self-monitors ..................................36
Figure 6. Felt obligation to post contrarian review (Full Sample, N=121)..............................................52
Figure 7. Felt entitlement to post contrarian review (Full Sample, N=121) ...........................................52
Figure 8. Felt obligation to post contrarian review (Subsample, N=59)..................................................55
Figure 9. Felt entitlement to post contrarian review (Subsample, N=59)................................................55
Figure 10. Relationship between collective identity and contrarian review posting occurrence .............60
Figure 11. Path analysis: Identity → Obligation → Positive posting ......................................................60
Figure 12. Relationship between self-monitoring and contrarian review posting ...................................61
Figure 13. Path analysis: Self-monitoring → Entitlement → Negative posting ......................................62
Figure 14. Dental patient satisfaction distribution...................................................................................70
Figure 15. Distribution of dental patients’ valence of reviews ..............................................................71
Figure 16. Relationship between patients’ tenure with dental office and odds of posting ......................75
Figure 17. Relationship between valence of satisfaction and review posting rate...................................76
Figure 18. Length distribution of dental office reviews ..........................................................................78
Figure 19. Relationship between patient tenure and review length, split by review valence ..................85
CHAPTER 1
Introduction
Word-of-Mouth communication (WOM), defined as informational exchange among
consumers about the characteristics, usage, and ownership of products (Kozinets 1997, 2002a;
Bickart & Schindler, 2001; Rothaermel & Sugiyama, 2001; Chevalier & Mayzlin, 2003; Gau &
Gu, 2008), is one of the oldest and most powerful influences on consumer behavior. In fact,
consumers generally regard peers’ advice as more trustworthy and valuable than marketer-
generated information (Price & Feick, 1984; Swartz & Stephens, 1984; Herr, Kardes & Kim,
1991), because the audience’s attributions of the source’s intentions are a key factor in perceptions of
trustworthiness (Eagly, Wood, & Chaiken, 1978). While WOM has been traditionally spread
among acquaintances through personal “contagions,” the internet has dramatically increased the
scale of WOM communications (Dellarocas, 2003).
Through the internet, for the first time in human history, consumers can make their
thoughts, feelings and viewpoints on products and services easily accessible to a global community
of internet users. The internet acts as a megaphone, giving individuals’ WOM reach far beyond
anything previously possible (Solovy, 2000). As a result, online WOM (eWOM) has become a major
informational source: online product reviews extend far beyond traditional settings and can reach a
virtually unlimited number of consumers.
Marketing literature confirms that consumers do pay attention to online product reviews
and act upon them to make purchasing decisions (Chatterjee, 2001; Chevalier & Mayzlin, 2006;
Senecal & Nantel, 2004). A study conducted by DoubleClick in 2007 found that eWOM plays a
very important role in consumers’ purchasing process for many types of products, and for some
goods such as electronics and home products, product review websites outrank all other media in
influencing customer decisions. Among web users (70% of the U.S. population), web content has
moved into second place, ahead of printed reviews and advice from salespeople, in influencing
consumer decisions (Rubicon Win Marketing, 2008). Bickart and Schindler (2001) even found that
participants who are exposed to online consumer discussions report more product interest than
participants who are exposed to corporate web pages. Furthermore, research in behavioral
economics repeatedly finds positive relationships between user-generated content and product sales
(Chevalier & Mayzlin, 2006; Senecal & Nantel, 2004; Dellarocas et al., 2004; Godes & Mayzlin,
2004). In fact, several companies (e.g. Epinions.com, Amazon.com, Citysearch.com,
AngiesList.com, Yelp, etc.) have recognized a business opportunity in this phenomenon and are
proactively trying to induce consumers to “speak the word” on their online platforms about
products and services (Godes et al., 2005).
Recent behavioral economics investigations of consumer review forums, however, have
discovered systematic expression bias in product review patterns (Hu, Pavlou, & Zhang, 2007;
Eliashberg & Shugan, 1997; Chevalier & Mayzlin, 2006). When perceptions of product quality of
the buyers who post reviews online differ from those of most consumers, the reviews will yield
systematic biases even if they are truthful reports of perceived quality. This self-selection problem
is particularly serious given the powerful influence that online reviews have over consumer
behavior. In addition, the fact that potential buyers may not be aware of these biases, and thus do
not adjust the relevance of different reviews accordingly, has a direct impact on the efficiency of
online review systems. A deeper understanding of the forces that motivate consumers to write
online reviews is therefore emerging as a question of both theoretical and practical significance.
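The logic of this self-selection problem can be made concrete with a small simulation. The sketch below is purely illustrative: the satisfaction distribution and the posting rule are invented assumptions, not estimates from any study reported in this dissertation. It shows that when satisfied consumers are assumed to post at a higher rate, the average posted rating overstates average satisfaction even though every individual review is truthful.

```python
import random

random.seed(42)

# Hypothetical population: true satisfaction on a 1-5 scale,
# normally distributed around 3 and clipped to the scale endpoints.
population = [min(5.0, max(1.0, random.gauss(3.0, 1.0)))
              for _ in range(100_000)]

def post_probability(satisfaction):
    # Invented posting rule: more satisfied consumers are assumed
    # to be more likely to write a review.
    return 0.05 + 0.06 * satisfaction

# Every posted review truthfully reports the consumer's satisfaction.
posted = [s for s in population if random.random() < post_probability(s)]

true_mean = sum(population) / len(population)
review_mean = sum(posted) / len(posted)
print(f"mean satisfaction in the population:    {true_mean:.2f}")
print(f"mean satisfaction among posted reviews: {review_mean:.2f}")
```

Under these invented parameters the posted-review average sits noticeably above the population average, so a reader who takes the review distribution at face value would overestimate typical satisfaction.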
While behavioral economics investigations provide robust evidence for the discrepancy
between product quality and product review distributions, little attention has so far been devoted to
the psychological antecedents of these expression biases. One reason for this gap in the literature
has been the tendency to adopt the rational actor stance, and explain away the behavioral patterns
with a variation of the self-interest motive. Although the parsimony of such a model is tempting,
findings of more qualitative work (e.g. Hennig-Thurau et al., 2004; Sundaram, Mitra, & Webster,
1998) suggest a more complicated motivational model. In fact, the most rigorous investigation of
eWOM antecedents found that social motives such as concern for other consumers, and
enhancement of the community, to be more powerful predictors of posting reviews than more
egocentric motives, such as venting of emotions and economic incentives (Hennig-Thurau et al.,
2004).
The overarching objective of this dissertation is to investigate eWOM, in particular
consumer reviews, as a speech act with the social goal of sustaining meaningful exchange of
information (Grice, 1989; Ho & Swan, 2004). I adopt a social psychological lens and argue that
the production of online consumer reviews is a social process motivated by social obligations
toward the audience and regulated by social norms. Although it is a virtual truism that traditional
WOM is an act of cooperative communication, the important differences between offline and
online contexts warrant an investigation in order to establish the latter as a social act as well.
This dissertation consists of six chapters. In this first chapter, I lay out my argument for
the role of social norms and obligations in consumers’ decision to post a review. In Chapters 2
through 5 I describe four empirical investigations of social dynamics involved in review posting
behavior. In Chapter 6, I discuss the practical and theoretical implications of my findings.
In this chapter, I put forth a social psychological model for the occurrence of online
consumer reviews. I first define eWOM and explain why I have chosen to focus on consumer
reviews in particular. I then draw on ethnographic work of online consumer communities to argue
that like communicators of traditional WOM, authors of online reviews perceive their audience as
socially real. Next, based on previous research of the antecedents of traditional WOM, I propose
that the valence of consumer experience qualifies the strength and form of social pressures
involved in people’s decision to post reviews. This in turn can lead to systematic expression
biases, similar to those already discovered in behavioral economics investigations of online review
forums. Finally, I introduce the concept of shared social identity with the audience as the
psychological mechanism driving these social influences. I conclude this chapter with a brief
overview of the studies that make up the empirical portion of this dissertation.
Defining the Consumer Review Phenomenon
WOM is defined as all types of informational exchange among consumers about the
characteristics, usage, and ownership of products and services. It is considered a major driver of
consumer adoption and diffusion of new products, particularly for late adopters (Banerjee, 1993;
Biyalogorsky et al., 2001; Brown and Reingen, 1987; Eliashberg et al., 2000; Eliashberg & Shugan,
1997; Krider & Weinberg 1998; Buttle 1998; Herr et al., 1991; Mahajan et al., 1984; Sheth, 1971).
While WOM has been traditionally spread among acquaintances through personal “contagions”,
the internet has dramatically increased the scale of WOM communications (Dellarocas, 2003).
Through electronically based discussion forums, bulletin boards, listservs, chat rooms
and newsgroups, the internet provides consumers with the ability to share knowledge, experience and
opinions worldwide. The popularity of consumer exchange is reflected in the vast number of online
gathering places, as well as in the number of contributions made every day (Hauben, 1999;
Horrigan & Rainie, 2002).
Horrigan & Rainie, 2002). 91% of U.S.-based internet users contribute content online (Lenhart,
Horrigan, & Fallows, 2004), and about 35% of users contribute content in the form of online
reviews at least once per year (Max & Mace, 2008). Furthermore, marketing literature suggests
that consumers do pay attention to online product reviews and act upon them to make purchasing
decisions (Chatterjee, 2001; Chevalier & Mayzlin, 2006; Senecal & Nantel, 2004). Reichheld
(2003) even claims that the customer’s propensity to recommend a product online is the most
important success measure in marketing today.
While eWOM communication takes place in many forms, I focus on consumer review
forums because they are the most widely used of existing eWOM formats (Hennig-Thurau et al.,
2004; Max & Mace, 2008). These forums provide reviews on almost every area of consumption,
not just a specific field. Furthermore, eWOM articulated on these forums can be expected to exert
a stronger impact on consumers than other forms of eWOM, because they are relatively easy to
operate and require less internet-related knowledge on the part of the consumer (Dellarocas, 2005).
Consumer Review Forums as Virtual Communities
My contention that review posting is an intentional social action is based on the premise
that consumer review forums are virtual communities that act as relevant reference groups for
individual participants (e.g. Kozinets 1997, 2002a; Bickart & Schindler, 2001; Rothaermel &
Sugiyama, 2001; Chevalier & Mayzlin, 2003). Kozinets, in his taxonomy of online communities,
refers to consumer review forums as communities of consumption, and defines them as “affiliative
groups whose online interactions are based on shared enthusiasm for, and knowledge of, a specific
consumption activity” (Kozinets, 1997, p. 254). There are two primary characteristics of these online
environments: first, almost all interaction is mediated by text; second, physical and often temporal
distance exist between participants (Burnett & Buerkle, 2004). Because the virtual community is
constructed through public exchange of texts, the primary mode of behavior within the community
is marked by relations to the group as a whole. Writers of messages undertake their activities with
a group of readers (some of whom are known and others unknown), and readers undertake their
activities within the context of messages that define the group itself (Jones, 1998).
Since the inception of online communication, the appropriateness of using the community
concept to describe online forums has been debated across multiple lines of work. On one side of
the debate are the social presence and the reduced cues theories, which argue that interaction via
the internet is antisocial. The social presence theory (Rice & Case, 1983; Hiltz, Johnson & Turoff,
1986; DeSanctis & Gallupe, 1987) stresses the difficulty of building relationships via the internet
due to lack of non-verbal information, which leaves communication cold and impersonal. In a
similar light, the reduced cues theory states that a lack of social and contextual cues undermines
self-regulation and self-awareness, which in turn reduces the relevance of social norms and constraints
(Kiesler et al., 1984; Sproull & Kiesler, 1986; McGuire, Kiesler & Siegel, 1987). Furthermore, both
low social presence and reduced cues over the internet increase perceived anonymity, freeing
people from social conventions and inhibitions. In this way, this perspective argues against the role
of social obligations and constraints in review posting behavior.
It is important to note that the social presence and reduced cues theories were developed
in the mid-1980s and are mostly based on lab experiments. Since then, many researchers have shown
by means of field studies that computer-mediated environments can be very rich in socio-emotional
content, and many users develop intimate and meaningful relationships with one another and the
community as a whole (e.g. Rice and Love, 1987; Parks and Floyd, 1996; Parks & Roberts, 1998).
For example, internet users have found ways to circumvent the cold character of cyberspace by
using emoticons (e.g. smileys) and other paralinguistic codes (e.g. capital letters to express
excitement or anger) to communicate emotions and feelings. They have also learned to decode and
interpret such social information. The structure of online forums has also evolved to include a
variety of feedback mechanisms (e.g. helpfulness ratings of reviews), designed to increase the
availability of social cues, and in this way regulate posting behavior.
Recent investigations of consumer forums suggest that these communities can act as
important reference groups for the individual participants (e.g. Kozinets, 1997, 2002a; Bickart &
Schindler, 2001; Rothaermel & Sugiyama, 2001; Chevalier & Mayzlin, 2003; Gau & Gu,
2008). For example, Gau and Gu (2008) explored the role of social influence in the
formation of consumer product ratings. Using data from CNET.com, they found that recent
consumer reviews have positive and significant influence on subsequent consumer reviews. These
findings suggest that online consumer reviews are not only determined by private product
evaluations but also by public opinion. However, the specific mechanism of this social influence
was left to be determined. One possibility is that people’s private opinions shift to converge
toward public opinion, and their publicly expressed opinions follow. Another possibility is that
only people’s public opinions (in the form of ratings) shift to converge toward public opinion. Yet
another possibility is that people’s rate of opinion expression via reviews is higher when their
opinions are in line with versus contrary to public opinion. While all three mechanisms could
account for social influence effects on posting activity, this dissertation speaks to the shifts in
opinion expression rates resulting from changes in social obligations and proscriptions.
In this dissertation, I view the act of posting as an integral component of the informational
economics of consumer review forum communities. Participants who share their consumer
experience through posting do so in the spirit of exchange. Rather than simply giving information
away, they are exchanging it for information that may be held by others. That is not to suggest that
these consumer communities employ a highly structured or formalized economy of information
exchange in which information debits and credits are tallied in some type of community ledger.
Instead they are governed more by a spirit of a “gift economy” regulated by social norms and
obligations. If members of consumption communities post in this spirit of gifting, then their
posting behavior should, at least in part, be driven by a generalized feeling of obligation toward
their audience. Furthermore, any posting that may deter rather than enhance the overall quality of
information on the forum may be suppressed via social proscriptions. Taken together, these social
influences can affect when people choose to post or not to post their consumer experiences online.
I also propose that these social influences create valence-based biases in the overall
distribution of reviews, which can jeopardize the utility of consumer review forums in optimizing
consumer choice. In the next section I draw on findings from work on antecedents of WOM
communication, and propose that people feel more obligated to post positive than negative reviews,
and more restricted from posting negative than positive reviews.
Antecedents of Online Reviews
Little attention has so far been devoted to the antecedents of online product reviews.
Through content analysis of participants’ self reports, Hennig-Thurau and colleagues (2004)
found that altruism (i.e. collective self-esteem and concern for other consumers), economic
incentives (resources distributed by consumer forum administrators), and homeostase utility (the
restoration of equilibrium from an unbalanced consumption experience through venting of
emotions) were the most frequently mentioned motives driving review postings. The fact that
participants mentioned altruistic motives with a higher frequency than any other motive categories
suggests that obligations, whether toward other individual consumers or toward a collective of
consumers, play a nontrivial role in people’s decisions to post online reviews.
While Hennig-Thurau’s investigation did not explore how the valence of consumers’
experiences could qualify the strength of these social obligations, the findings from the most
comprehensive empirical investigation of traditional WOM antecedents suggest that the motives
for engaging in positive WOM may differ from motives for engaging in negative WOM
(Sundaram et al., 1998). Among the most important differences discovered in this investigation
was a higher incidence of altruistic motives in people’s self reports for positive
WOM than for negative WOM.
One reason for this difference may be due to review authors’ attempts to accommodate
their audiences’ preferences for positive over negative WOM. In fact, recent work on readers’
perceptions of positive and negative reviews found that positive reviews are seen as more helpful
than the negative ones (Wu & Huberman, 2008), despite the fact that the effect of negative WOM
on consumer decision making is actually stronger than that of positive WOM (e.g. Holmes & Lett,
1997; Mizerski, 1982; Herr et al., 1991). In addition, there is robust evidence in the complaining
behavior literature that complaining deviates from both prescriptive and descriptive norms. Thus,
complainers are often labeled as whiners, and are at risk for experiencing embarrassment when
their negative information is attributed to their personality rather than the negative consumer
experience itself (e.g. Laczniak, DeCarlo, & Ramaswami, 2001). Furthermore, consumers with
high self-presentation concerns tend to feel more reluctant to publicly complain because of their
greater concern for impression management (Hong & Lee, 2005; Marquis & Filliatrault, 2002;
Kowalski & Cantrell, 1995; Slama & Celuch, 1994).
Negative WOM communication is not only perceived as antinormative, it is also much
less likely to occur than positive WOM (Rossiter & Percy, 1997; Chevalier & Mayzlin, 2003; Hu
& Zhang, 2006). For online reviews in particular, Hu and colleagues (2006) found that for most
products on Amazon.com, reviews are positively skewed. While this pattern may simply reflect
the distribution of actual consumer experiences, I put forth the following social-psychological
account. Social pressures in the form of obligation to the audience promote the posting of positive
reviews, and reluctance to publicly complain prevents the posting of negative reviews. In other
words, the social normative script of review posting behavior increases the expression of
consumers’ positive experiences and decreases the expression of consumers’ negative experiences,
producing a positive bias in products’ review distributions.
The underlying premise of the above prediction is that authors of online reviews are
motivated to adhere to social norms. This premise holds for traditional WOM communication,
where the social ties between communicators and their audiences are strong, and the
communication occurs in real time, in a shared physical space. For online reviews, however,
the social ties between the authors and their audiences are typically weak, and the communication
is mediated through a technology interface (Hoffman & Novak, 1996; Chatterjee, 2001). Despite
the lack of direct social ties between authors and readers of consumer review forums, shared
collective identities based around their membership in a particular forum, shared consumption
needs, etc. may exert influence on posting behavior via social norms and obligations (Bagozzi &
Dholakia, 2002). In the next section, I introduce the concept of social identity as the mechanism
that drives the intentional social action component of review posting behavior.
Social Identity as the Psychological Mechanism
The self not only encompasses one’s individual identity, but also comprises social
identities associated with valued group membership. Tajfel (1978) defines social identity as “that
part of an individual’s self concept which derives from … knowledge of membership of social
group[s] together with the value and emotional significance attached to that membership” (p. 63).
Social identity is closely wedded to norms that define how group members should think, feel, or
behave. Through the process of referent informational influence, the norms of the group are
inferred from prototypical properties of the group. The prototype informs group members
which behaviors are typical and hence appropriate, desirable, or expected (Tajfel & Turner, 1986).
In addition, the derogation of deviants from these prototypes, through social feedback mechanisms,
further reinforces the norms and motivates members’ adherence to them (Abrams, Marques,
Brown, & Henson, 2000).
In computer-mediated groups, social identity has been shown to exert influence through
two distinct processes: deindividuation and we-intentions. According to the social identity model
of deindividuation effects (SIDE: Postmes, Spears, & Lea, 1998; Spears & Lea, 1992, 1994),
computer-mediated groups can be very real to their members psychologically (e.g. Bouas & Arrow,
1996; Finholt & Sproull, 1990; Lea, Spears & de Groot, 1998; Postmes, Spears & Lea, 1999),
despite the fact that many of the factors traditionally associated with social and interpersonal
attraction are absent in such contexts. The SIDE model is supported by a range of studies showing
that visual anonymity does not preclude normative behavior (Postmes & Spears, 1998). In fact,
social influence and attraction can actually be stronger in anonymous groups than in settings in
which members are visually identifiable, because anonymity may accentuate the interchangeability
of group members—provided that group members share a common identity (e.g. Lea & Spears,
1991; Lea et al., 1998; Postmes, 1997; Postmes, Spears, Sakhel, & de Groot, 1998; Postmes &
Spears, 1999; Spears, Lea, & Lee, 1990; Walther, 1997; also see Postmes, Spears, & Lea, 1998,
for a review). In one such study, a style of social interaction, either prosocial or efficiency
oriented, was activated with a priming method. Although the style was activated successfully, over
time only
anonymous groups converged on the primed group norm. In groups made identifiable by means of
portrait pictures, no norm formation was observed. A subsequent study replicated this effect, and
demonstrated that only in anonymous groups did the norm generalize to nonprimed group
members, confirming that social influence is responsible for this behavioral pattern. These results
show that social influence does not require physical presence or visible social cues, but stems from
the power of the group, via social identity (Tajfel & Turner, 1986).
Another line of work shows that social identity can also exert influence through we-
intentions, or the desire to achieve collective goals. These we-intentions arise from the self-
categorization process, that is, from defining one’s self in terms of central features of a self-
inclusive social category, rendering the self stereotypically “interchangeable” with other group
members. In this way, one’s self-esteem is boosted to the extent that one’s ego-ideal overlaps with
that of others, and acting as the other acts or wants one to act reinforces one’s self-esteem (Hogg &
Abrams, 1988; Tajfel, 1981). In applying this concept to virtual communities, Bagozzi & Dholakia
(2002) argue that, through the process of self-categorization, individuals develop we-intentions and
engage in behavior that helps achieve collective goals and enhance the welfare of the community
and in this way maintain a positive self-defining relationship with the virtual community. In line
with this argument, Bagozzi and Lee (2002b) found that social identity drives membership
participation in online chat rooms, and this effect is mediated by collective desires (or we-
intentions).
Research on deindividuation and we-intentions suggests that within the context of
consumer review forums, people may be more likely to post positive than negative reviews because
the former are pronormative and the latter are antinormative, and/or because the former promote
group/collective goals, while the latter do not. Although it is beyond the scope of this dissertation
to discern whether deindividuation, we-intentions, or both play a role in review posting behavior, it
is important to acknowledge that we-intentions are typically expected to operate in fully
cooperative groups, where members perform coordinated individual actions, and this coordination
entirely determines group action (Bagozzi & Lee, 2002b). Consumer review forums, however, are
only partially cooperative: while members may participate in response to earlier
communication (i.e. previous reviews, or others’ requests for reviews) or originate communication
(i.e. write the first review about a product), these actions lack the extent of mutual understanding,
commitment, and coordination characteristic of fully cooperative group actions, such as those
observed in chat rooms (Bagozzi & Dholakia, 2002). Thus, we-intentions may be less powerful in
consumer review forums. In contrast, the process of social control via deindividuation only
requires that members’ behaviors are fully public/visible, so that members are able to derive the
prototypical member behavior.
Studies Overview
The purpose of this dissertation is to study the social dynamics involved in consumers’
decisions to share their consumer experiences by posting online reviews. Building on the above
research, I propose that online review posting is a social act, regulated by social norms and
obligations. I introduce review authors’ collective identity with their audience as a major
determining factor of the relevance of these social pressures on the authors’ decisions to post.
Given the broadness of this research agenda, each of the four empirical investigations (presented in
Chapters 2 through 5) focuses on a different aspect of the research question, and as a result they
vary in their levels of analysis and methods. In Chapter 2, I explored an archive of online product reviews
written by a random sample of 1,000 Epinions members as well as readers’ helpfulness ratings of
those reviews. The goal of this study was to test whether more active authors tend to write more
pronormative reviews and explore possible mechanism(s) driving this relationship. In Chapter 3, I
switched the level of analysis from reviews to the authors of those reviews, and surveyed the
original Epinions author sample in order to directly measure the magnitude of social pressures felt
by the authors to post reviews and the strength of their collective identity with fellow Epinions
members. By merging this survey data with authors’ review posting behavior, I investigated the
relationship between social pressure, collective identity, and review posting behavior on the
Epinions forum. Chapter 4 describes a lab experiment where I investigated how previous reviews
of a service provider (i.e. a restaurant) oblige and censor subsequent review posting of that same
provider. The goal of this study was to investigate product reviews as speech acts, which are
embedded in an ongoing conversation of reviews. The final investigation, described in Chapter 5,
is a field experiment in which I tracked the frequency with which consumers chose to post a review
of their health service provider (i.e. a dentist) when solicited to do so by a third party. The
goal of this study was to explore how the valence of the experience, the strength of the relationship
between the consumers and their audience, and the strength of the relationship between the consumer
and the service provider affect consumers’ odds of posting. While these four investigations all focus on
the role of social regulation in online consumer review posting, the idiosyncrasies of context and
operationalizations of variables across the studies limit the generalizability of the findings across
studies. In other words, as the organization of this dissertation implies, the four phenomenological
studies should each be taken on their own merits.
CHAPTER 2
Introduction
This study analyzes an archive of naturally occurring product reviews posted by a random sample
of 1,000 members of one of the largest and oldest consumer review forums, Epinions. The purpose
of this archival study was to test whether the valence distribution of members’ reviews converged
toward readers’ preferences for positive over negative reviews, and whether this convergence was
driven by more experienced members. Based on previous research that documented a reciprocal
relationship between volume of members’ group-based activity and the extent to which they are
collectively identified with the group (Bagozzi & Lee, 2002b; Bagozzi & Dholakia, 2002), I
assumed that Epinions authors’ volume of review posting experience operationalized their level of
identification with the Epinions community.
Predictions
Behavioral economics investigations demonstrate that most online product reviews follow
a J-shaped distribution with more extreme positive than extreme negative reviews on Likert-type
scales (see Hu, Pavlou, & Zhang, 2007; Eliashberg & Shugan, 1997; Chevalier & Mayzlin, 2006). In
this study I explore whether the positive skew1 in the valence distribution is also descriptive of
Epinions reviews. While this distribution may be a mere reflection of consumers’ positively
skewed tastes, experimental findings show that online review distributions are biased estimators of
consumers’ perception of product quality. For example, Hu and colleagues (2007) conducted a
controlled experiment where all respondents were asked to review a randomly selected product.
Because all respondents reported their product ratings, the result was a unimodal (approximately normal)
distribution, while Amazon’s review distributions of the same product were J-shaped. In other
words, the mean of Amazon reviews was more positive than the respondents’ mean. The
researchers put forth a purchase bias explanation, namely that consumers who purchase the
product, being already more positively predisposed to it, are more likely to write a
review (Hu et al., 2007). This purchase bias explanation assumes that the decision to post is made
in a social vacuum. However, qualitative and ethnographic investigations of the antecedents of
eWOM communication confirm the prevalence of social motives in posting reviews. Using an
online sample of 2,000 consumers’ self reports of motives for posting reviews, Hennig-Thurau
and colleagues (2004) conducted a factor analysis to explore the psychological antecedents of
product reviews. The resulting analyses found that social motives, such as adherence to community
norms and pursuit of collective goals, were among the most frequently mentioned clusters of motives.
1 Positive skew is intended to mean that there are more extreme positive than extreme negative reviews in the review valence distribution (i.e. the distribution is shaped like the letter J). This terminology is not to be confused with the statistical meaning of positive skew, which implies a longer tail of the distribution in the positive portion than in the negative portion of the distribution.
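The distinction drawn in footnote 1 can be illustrated numerically. In the sketch below, the rating counts are hypothetical, chosen only to produce a J shape; the point is that a distribution with more extreme positive than extreme negative ratings nevertheless has negative skewness under the conventional statistical definition (the third standardized moment), because its longer tail lies at the low end of the rating scale:

```python
from statistics import mean, pstdev

# Hypothetical J-shaped rating distribution: many 5-star and some 1-star
# ratings, few ratings in the middle (counts are illustrative only).
ratings = [5] * 60 + [4] * 15 + [3] * 5 + [2] * 5 + [1] * 15

m, s = mean(ratings), pstdev(ratings)
# Third standardized moment: the conventional measure of skewness.
skewness = sum((x - m) ** 3 for x in ratings) / (len(ratings) * s ** 3)

# More extreme positive than extreme negative ratings, yet skewness < 0.
print(round(skewness, 2))
```

Here a distribution that is “positively skewed” in this chapter’s J-shaped sense yields a statistical skewness of about −1.2.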
In the present investigation, I explore whether the positive skew in the review valence
distributions can also be attributed to social pressures (i.e. community norms and/or collective
goals) favoring positive over negative review posting. The most direct evidence for social pressure
favoring positive over negative reviews was documented by Wu and Huberman (2008), who found
that positive reviews are rated as more helpful than negative ones. This finding suggests that
members are more positively reinforced for posting positive reviews than for posting negative reviews.
In addition, the complaining literature illustrates that complainers are at risk for experiencing
embarrassment because their negative information is often attributed to their personality rather than
their negative consumer experience. In line with this finding, consumers with high self-
presentation concerns report feeling censored about publicly complaining, and fear being derogated
by their audience (Marquis & Filliatrault, 2002; Kowalski & Cantrell, 1995; Slama & Celuch,
1994). Assuming that impression management concerns are relevant in virtual communities (Hong
& Lee, 2005), consumers may feel censored from posting their negative experiences. Finally,
assuming that positive reviews are more prototypical of group behavior, as suggested by the J-
shaped valence distribution typically detected in consumer reviews (see Hu et al., 2007 for review),
consumers may use this descriptive behavioral norm to guide their posting behavior. In sum, I
initiate my data exploration with two simple predictions, the confirmation of which would provide
initial evidence for the convergence of members’ posting behavior toward the group’s prescriptive
and/or descriptive norm.
Hypothesis 1: Positive reviews are rated as more helpful than negative reviews.
Hypothesis 2: The valence distribution of Epinions reviews is positively skewed, with more extreme positive than negative reviews.
I adopt the social psychological lens and conceptualize member participation in online
consumer review forums as intentional social action, resulting from adherence to community
norms (i.e. deindividuation) and pursuit of collective goals (i.e. we-intentions). This premise is
based on previous work showing that virtual communities become reference groups and
influence behavior through a reward and punishment system that is contingent upon the visibility of
one’s behavior and its alignment with the group’s norms and obligations (Bagozzi & Dholakia,
2002). Online communities often have a code of conduct that specifies community standards with
regard to behavior, language, content, identity, commercial use, etc. (Kozinets, 1999; Hagel &
Armstrong, 1997). For example, conformity in online environments shows itself in specific email
writing styles, length of messages, etc. (McCormick & McCormick, 1992; Postmes, Spears & Lea,
1999). Online communities ensure that participants conform to group norms by reproaching the
offender through various feedback mechanisms. Furthermore, community managers build on this
tendency to enforce group norms by providing community members with reward systems
designed to withhold rewards from norm deviants.
With respect to online behavior, group norms emerge whenever there is prolonged
computer-mediated interaction and communication between people (e.g. McCormick &
McCormick, 1992; Postmes, Spears & Lea, 1999). This community participation determines
degree of social involvement and, in this way, drives the effects of community influence on
behavior. In other words, experience, time and development of social relationships, increase the
likelihood of community influence, by diminishing the alienating and impersonal character of
computer-mediated interaction.
Hypothesis 3: The convergence of the valence distribution of reviews toward readers’ preference for positive over negative reviews (hypothesis 2) is driven by the posting activity of experienced members. In other words, reviews of more experienced authors are more positive than reviews of less experienced authors.
This pattern may occur as a result of more positive members self-selecting into writing
more reviews (implying the above prediction to be driven by between-member effects) or as a
result of members writing more positive and less negative reviews as they become more
experienced and socialized (implying the above prediction to be driven by within-member effects).
While both effects may be at play, it is important to note that virtual communities have low entry
and exit barriers. If a member does not agree with group norms, he can simply switch to a different
community rather than become socialized. In traditional communities this option is less available
and thus pressure to conform (i.e. socialization) is much higher. This important difference between
online and offline social environments suggests that online community formation derives from a
self-selection process more than from socialization (Muniz & O’Guinn, 2001). In other words,
members whose behavior fits group norms are more likely to stay and be more active.
Methods
Description of research context: Epinions.com
I used the following five criteria for selecting a consumer review forum that could serve as
the basis for my investigation: 1) abundance of member-generated contributions, 2) lively
participation and high traffic, 3) large number of members, 4) enough variation among them in
terms of community participation and consumer characteristics, and 5) access to member profiles
and review writing activities. With these criteria in mind, I searched for suitable communities
using search engines, trade journals and magazines, as well as snowball inquiry amongst
colleagues, friends, and acquaintances. Many options were scrutinized and dismissed because one
or more criteria were not met. After close investigation, I chose Epinions as a case study because
of its large and active member database, its status as one of the most mature online consumer
opinions communities on the web, and its administrators’ willingness to provide a sizable sample
of members’ reviews.
Epinions is a consumer opinions forum that offers consumers the ability to share their
experiences about a wide array of consumer goods and services. The company was originally
developed as a feature for Shopping.com to help their shoppers make informed buying decisions.
However, the community became such a success that the administrators decided to develop it as a
separate business unit. The community went online in 1999, with the mission to “help people make
informed buying decisions” by providing a platform for shoppers to share the pros and cons of
products they’ve owned or used.2 Epinions grew rapidly, and currently has 2 million registered
members. Nearly 10 years after the start-up, Epinions’ database includes more than 4.5 million
reviews. To access the consumer reviews, no registration is required. However, to write a review,
users must register with a member name, password, and a confirmed3 email address. Thus, entry
barriers into this community are low.
Although it is beyond the scope of this project to describe the entire structure of the
Epinions site, there are several important features (the Income Share Program, Helpfulness
Ratings, Long Reviews, Express Reviews, and the Personal Webpages) that warrant a brief
description, as they are directly relevant to the research problem.
Income Share Program. To stimulate review generation, Epinions has the Income Share
program. The following excerpt shows how the Epinions administrators explain this program to
members:
Epinions rewards writers who contribute reviews that help other users make decisions. Epinions takes a share of the revenue gained from providing consumers with high-quality information and splits it among all authors based on how often their reviews were used in making a decision (whether or not the reader actually made a purchase)…. Income Share bonuses are not tied directly to product purchases or readers’ helpfulness ratings of reviews but are based instead on more general use of reviews by consumers making decisions. As a result, members could potentially earn as much for helping someone make a buying decision with a positive review as they could for helping someone avoid a purchase with a negative review.4
Although Epinions administrators reassure members that the Income Share program is
determined by an objective formula that automatically distributes the bonuses, the exact details of
the formula are not revealed to the public to prevent gaming or other attempts to defraud the
system.
2 http://www.epinions.com/member/.
3 The requirement to register with a confirmed email address reduces the chances of foul play within review posting activity.
4 http://www.epinions.com/member/income-share-program.
Helpfulness Ratings. One of the major ways that Epinions facilitates direct feedback to
members on the quality of their reviews is by encouraging readers to judge the extent to which they
found each review helpful on a 5-point Likert scale. These helpfulness ratings are not
used in the Income Share calculation, and the strength of the relationship between these two
independent review quality indicators is unknown. Previous research, however, suggests that
reviews that are considered helpful by users are not necessarily influential on consumer choice, and
vice versa (Ghose & Ipeirotis, 2007).
Long Reviews vs. Express Reviews. To further control the quality of reviews that are
incorporated in the Income Share Program calculation, Epinions requires that these reviews,
termed Long reviews, be at least 200 words in length and include a recommendation for or
against the product or service, referred to as the Bottom Line. However, to avoid opinion
censorship, Epinions also offers members the option to write shorter or Express reviews, which can
be less than 200 words in length, but are not incorporated in the Income Share Program. These
important distinctions suggest that Express reviews may be less constrained by structural factors
(e.g. minimum length) than Long reviews and, unlike Long reviews, are not driven by economic
incentives.
Personal Webpage. In addition to the information exchange component, Epinions also
offers opportunities for social interaction. Upon joining, every member is automatically supplied
with a personal homepage that fulfills several social functions. Members can reveal personal
information to other community members, such as age, profession, marital status, place of
residence, hobbies, etc. To give extra expression to their constructed identity, members can also
post photos. Each homepage comes with a guest book that can be used to send and receive personal
messages within the community, stimulating social interaction among members. Finally, the
webpage reveals the member’s Web of Trust, a network of people whose reviews the member
trusts, either directly or via another member in his Web of Trust network. This adds a social
networking dimension to the Epinions user experience. Although the administrators of Epinions did
not provide me with members’ personal webpages or Web of Trust information, the presence of
such features stimulates the community component of the Epinions member experience.
Other Activities. Other than generating content in the form of reviews and helpfulness
ratings, members can also offer product advice, participate in forums and chats, and make requests.
The administrators execute no censorship or editing, apart from cases in which members make
indecent or disruptive contributions. Thus, members have a strong sense of ownership of the content
on this forum.
Data Set
The data set consists of 6,015 consumer reviews and their accompanying product ratings,
posted on Epinions by a simple random sample of 1,000 active5 Epinions members/authors (an
average of 6.5 reviews per author). The data set also includes 65,536 helpfulness ratings for the
sampled reviews (an average of 10.1 per review). The helpfulness ratings were nested within
reviews, and reviews were nested within authors. Authors’ sign-up dates as well as posting dates
of reviews and helpfulness ratings were time-stamped to the nearest minute, allowing for derivation
of some of the key variables of interest. Below are descriptions of the variables included in
hypothesis testing and exploratory analyses.
Review Length. After filtering out HTML code from the reviews, the log of the character
count for each review was taken. This is a control variable in hypothesis 1 testing.
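The Review Length derivation can be sketched as follows; since the dissertation does not publish its code, the function name and the naive tag-stripping regex below are my own illustrative assumptions:

```python
import math
import re

def review_length(review_html: str) -> float:
    """Log of a review's character count after stripping HTML.

    The <...> regex is a naive stand-in for whatever HTML filtering
    was actually applied, which the text does not specify.
    """
    plain = re.sub(r"<[^>]+>", "", review_html)
    return math.log(len(plain))

print(review_length("<p>Great camera, <b>terrible</b> battery life.</p>"))
```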
Review Valence. In addition to writing reviews, authors are required to rate the
product on a 5-point Likert scale (anchored at 1=Avoid It!, 2=Below Average, 3=Average, 4=Above
Average, 5=Excellent). These product ratings were used as a proxy for review valence. This is the
predictor variable in the test of Hypothesis 1 and the dependent variable in the tests of the remaining
hypotheses. Note that the distribution of review valence in the data set closely resembles the J-shape
(i.e. positive skew) typically found in the literature. (See Figure 1 for the distribution of
review valence.)
Review Helpfulness. The sampled reviews were rated on 5-point scales (anchored at
1=Not Relevant, 2=Not Helpful, 3=Somewhat Helpful, 4=Helpful, 5= Very Helpful). The
helpfulness ratings for each rated review were averaged into a single score. Note that 3,275 of the
3,505 reviews included in the analysis were rated, and reviews varied in the number of helpfulness
ratings they received. This is the dependent variable in the test of Hypothesis 1.
Author Experience. For each review, the author’s experience at the time she wrote the
focal review was calculated by taking the log of the total number of previous reviews authored by
that member, plus one (i.e. log(# of previous reviews + 1)). This is a control variable in the test of
Hypothesis 1 and a predictor variable in the tests of the remaining hypotheses. For the main analyses, I
excluded 2,510 reviews for which the member’s experience exceeded 100 reviews, because
only 7 of the 1,000 members wrote more than 100 reviews. These members would have accounted
for more than 40% of the entire review set if not excluded and would have biased the results. (See
Figure 2 for the distribution of the Experience variable pre- and post-truncation.) Although I only
discuss the pattern of findings for the truncated data set, Tables 3 and 4 include a replication of the
analyses with the whole data set.
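Given the minute-level time stamps, the per-review experience score can be sketched as follows (a minimal illustration with hypothetical timestamps; the actual derivation code is not shown in the dissertation):

```python
import math

def experience_scores(post_times):
    """log(# of previous reviews + 1) for each of one author's reviews,
    given that author's posting timestamps in any order."""
    rank = {t: i for i, t in enumerate(sorted(post_times))}
    return [math.log(rank[t] + 1) for t in post_times]

# Hypothetical minute-stamped posting times for a single author:
times = ["2009-03-01 10:05", "2009-01-15 09:30", "2009-06-20 14:00"]
print([round(s, 2) for s in experience_scores(times)])  # → [0.69, 0.0, 1.1]
```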
5 I adopted Epinions’ definition of active members as those who have logged into the Epinions site at least once within the last three months of data collection.
Figure 2a. Distribution of Epinions author experience, pre-truncation
Figure 2b. Distribution of Epinions author experience, post-truncation
Review Type. Recall that when posting a review, members choose between writing a
Long or an Express review. To differentiate between the two types of review activity, this
choice was coded as a dichotomous variable, Type. In one of the exploratory analyses, this
variable was used as a proxy for the presence (Long Type = 1) versus absence (Express Type = 0) of
economic incentives to write a review.
Previous Review Helpfulness. The helpfulness ratings of the same author’s previous review
(which occurred prior to the focal review) were averaged into a single score. I explored how this
variable moderated the experience effect on review valence. Note that only 1,484 of the 3,505 reviews
were preceded by a rated review.
Results and Discussion
Descriptive Statistics
Figure 1 displays the valence distribution of reviews in the truncated data set. The valence
distribution is J-shaped, with more extreme positive than extreme negative reviews.
Figure 1. Valence distribution of Epinions reviews
Table 1 provides a correlation table of the primary variables in my analysis. There is a
positive relationship between the valence of reviews and their subsequent helpfulness ratings
(r(3273) = .142, p < .001), and a positive relationship between members’ experience and review
valence. These relations provide initial support for my predictions and warrant the hypothesis
testing described below.
Table 1. Epinions archival data correlation table

                               N       1        2        3        4        5
1 Review Valence             3505     —      .220**   .142**  -.013     .098**
2 Author Experience          3505              —      .531**   .406**   .595**
3 Review Helpfulness         3275                       —      .572**   .542**
4 Previous Rev. Helpfulness  1484                                —      .525**
5 Review Length              3505                                         —

** Correlation is significant at the .01 level (2-tailed).
*  Correlation is significant at the .05 level (2-tailed).
Table 2 summarizes the differences between Long and Express reviews. Note that
Long reviews are more positive (t(3503)=10.04, p<.0001), perceived as more helpful
(t(3273)=19.30, p<.0001), and written by members with more experience (t(3503)=28.18,
p<.0001) than Express reviews (see Table 2 for descriptive statistics of key variables split by
review type). In order to ensure that the relationships between review valence, review helpfulness,
and member experience are not simply artifacts of the differences among review types, I controlled
for review type in my statistical tests of hypotheses 2 and 3.
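As a check on the arithmetic, the pooled two-sample t statistic for review valence can be recomputed from the summary statistics reported in Table 2 (the small discrepancy from the reported 10.04 reflects rounding of the means and SDs):

```python
import math

# Summary statistics for review valence, taken from Table 2
n1, m1, s1 = 2552, 3.78, 1.33   # Long reviews
n2, m2, s2 = 953, 3.23, 1.74    # Express reviews

df = n1 + n2 - 2
pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t = (m1 - m2) / se
print(df, round(t, 2))  # → 3503 9.97, close to the reported t(3503) = 10.04
```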
Table 2. Epinions archival data descriptives split by review type

Variable                   Type      N      Mean      SD        t       df      p
Review Valence             Long     2552    3.78     1.33     10.04    3503   <.0001
                           Express   953    3.23     1.74
Author Experience          Long     2552   26.41    26.68     28.18    3503   <.0001
                           Express   953    1.84     5.85
Review Helpfulness         Long     2423    4.34     0.77     19.30    3273   <.0001
                           Express   852    3.80     0.51
Previous Rev. Helpfulness  Long     1350    4.52     0.69     12.82    1482   <.0001
                           Express   134    3.73     0.66
Review Length              Long     2552    3386   3160.84    27.86    3503   <.0001
                           Express   953     529    293.41
Hypothesis testing
Hypothesis 1. I predicted that readers should show a preference for positive over negative
reviews as indicated by their helpfulness ratings. I expected authors’ positive reviews to be rated as
more helpful than their negative reviews. The initial correlation results (see Table 1) support the
above hypotheses. However the positive relationship between valence and helpfulness could
simply result from readers’ rational assessments of review quality rather than a preference for
positive over negative reviews. Recall that Long reviews, which are more quality controlled,
longer and rated as more helpful than Express reviews, also tend to be more positive than Express
reviews. Another reason for the positive relationship between review valence and helpfulness
ratings could be that experienced members, whose reviews tend to be rated as more helpful, also
tend to write more positive reviews. If more experienced members write higher-quality reviews
than inexperienced members, then the positive relationship between valence and helpfulness could
simply be an artifact of the relationship between valence and experience. I therefore include the
above review quality indicators (review type, length, and member experience) as controls, to
establish that readers’ preference for positive over negative reviews is not fully driven by the higher
frequency of positive reviews among experienced authors and among Long-type reviews.
I ran a mixed effects model with helpfulness as the dependent variable and valence, length,
experience, and type as predictor variables (i.e. fixed effects), while specifying the intercept to be
estimated separately for each author (i.e. an author random effect). As the random effects model
summary indicates (Table 3, Model 1), even when controlling for author, length, experience, and
type, valence remains a significant predictor of helpfulness (β(.0065)=.0403, p<.0001). In
this random effects model, I attempted to control for between-author effects by specifying separate
intercept estimates for each author. However, because reviews were not distributed evenly
among authors, with the majority of authors writing only one review each and some authors writing
up to 100 reviews, the Hausman test indicated that the estimates from this random effects model
were biased, so the model could not reliably test for within-author change.
I therefore ran a more conservative fixed effects model, where I entered member as a fixed
effect (see Table 3, Model 2). The valence effect continued to be significant (β(.0068)=.032,
p<.0001), implying that readers rated positive reviews as more helpful than negative reviews, even
for reviews written by the same author. This robust valence effect supports my hypothesis 1 that
readers have a normative preference for positive over negative reviews.
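The within-author logic of this fixed effects test can be sketched with a demeaning estimator on synthetic data (the simulated numbers and variable names are illustrative; the dissertation’s actual model also includes length, experience, and type as covariates):

```python
import numpy as np

rng = np.random.default_rng(0)
n_authors, per_author = 50, 8
author = np.repeat(np.arange(n_authors), per_author)

# Simulate helpfulness with a true within-author valence slope of 0.04
valence = rng.integers(1, 6, size=author.size).astype(float)
author_effect = rng.normal(0.0, 0.5, n_authors)[author]
helpfulness = (2.0 + 0.04 * valence + author_effect
               + rng.normal(0.0, 0.2, author.size))

def demean_within(x, groups):
    """Subtract each group's mean: removes author-level intercepts,
    equivalent to entering author as a fixed effect."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

y = demean_within(helpfulness, author)
x = demean_within(valence, author)
beta = float(x @ y / (x @ x))  # within-author slope; recovers roughly 0.04
print(round(beta, 2))
```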
Table 3. Model summary predicting Epinions review helpfulness

TRUNCATED SAMPLE (at Experience > 100)

Random Effects Model (N=3505), Model 1
                      β        S.E.      p
Intercept           1.5145    0.108    <.0001
Valence             0.0403    0.0065   <.0001
TYPE                0.663     0.0379   <.0001
log(Experience+1)   0.0818    0.0106   <.0001
Length              0.3515    0.0169   <.0001

Fixed Effects Model (N=3505), Model 2
                      β        S.E.      p
Intercept           1.635     0.4147   <.0001
Valence             0.032     0.0068   <.0001
TYPE                0.569     0.0467   <.0001
log(Experience+1)   0.038     0.0102   <.0001
Length              0.382     0.0173   <.0001

FULL SAMPLE

Random Effects Model (N=6015), Model 9
                      β        S.E.      p
Intercept           0.1933    0.07199  <.0001
Valence             0.02475   0.004    <.0001
TYPE                0.5542    0.0299   <.0001
log(Experience+1)   0.0612    0.0056   <.0001
Length              0.2893    0.0111   <.0001

Fixed Effects Model (N=6015), Model 10
                      β        S.E.      p
Intercept           2.197     0.3248   <.0001
Valence             0.02      0.004    <.0001
TYPE                0.444     0.0355   <.0001
log(Experience+1)   0.05      0.0053   <.0001
Length              0.295     0.0109   <.0001
Hypothesis 2. I predicted that the Epinions review distribution would be positively skewed.
In fact, the asymmetry in the valence distribution was significant (.732, p<.05) and in the predicted
direction (see Figure 1). Of the 3,505 reviews in the data set, 24.1% (844) were
negative, while 64% (2,243) were positive (χ² = 634, p<.0001). When comparing only extreme
reviews, the pattern is the same: 16.6% (581) of the reviews were extremely negative, and 39.8%
(1,395) were extremely positive (χ² = 334.67, p<.0001). In other words, Epinions reviews were more
than twice as likely to be positive as negative.
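The first chi-square above can be recomputed directly from the reported counts, as a goodness-of-fit test against an even positive/negative split:

```python
# Counts of positive and negative reviews reported above
pos, neg = 2243, 844
expected = (pos + neg) / 2  # even-split null: 1543.5 in each cell
chi_sq = ((pos - expected) ** 2 / expected
          + (neg - expected) ** 2 / expected)
print(round(chi_sq, 1))  # → 634.0, matching the reported value
```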
Hypothesis 3. I predicted that more experienced authors’ reviews would adhere to
readers’ preference for positive over negative reviews. This could be driven by more positive
authors self-selecting into writing more reviews and thus becoming more experienced (i.e. between-
author effects). It could also be driven by authors writing more positive and/or fewer
negative reviews as they become more experienced (i.e. within-author differences).
I find initial support for this prediction in the simple correlational analysis (r(3505)=.22,
p<.0001), as well as in the mixed model analysis (β(.0258)=.0952, p<.0001; see Table 4, Model 5).
The model predicting review valence, with author identity entered as a random effect, supports the
account that authors who write more positive reviews self-select into writing more reviews,
perhaps because they are more likely to be positively reinforced via helpfulness ratings for writing
reviews. The socialization interpretation, however, cannot be tested when author effects are
specified as random, due to the bias problem detected by the Hausman test. I ran a more
conservative test of the socialization mechanism by entering the author effect as a fixed effect
(see Table 4, Model 6). This highly conservative test did not yield a significant experience effect
(β(.0238)=.035, p=.15). Hence, there is insufficient evidence that the relationship between
experience and valence is driven by within-author convergence toward readers’ preference for
positive reviews.
Given the above pattern of findings, social pressures may drive authors’ adherence to
readers’ preference for positive reviews in the following way: Because authors who initially write a
positive review are more likely to get positive feedback from readers than authors who initially
write a negative review, these “positive” authors may be more likely to become collectively
identified with their audience. Collective identity would in turn motivate them to write more
positive reviews via we-intentions and/or deindividuation processes. Even in the absence of direct
feedback in the form of helpfulness ratings, authors may vary on the extent to which they perceive
the online virtual space as socially real. Authors with an elevated state of social awareness in the
virtual space would be more likely to feel obliged to post pronormative (i.e. positive) reviews and
inhibited from posting antinormative (i.e. negative) reviews. In other words, by independently
driving both the valence and the volume of authors’ reviews, collective identity accounts
for the relationship between author experience and review valence.
Table 4. Model summary predicting Epinions review valence

Random Effects Model (N=3505)
                             Model 3                  Model 5                  Model 7
                         β       S.E.     p       β       S.E.     p       β       S.E.     p
Intercept              3.1112   0.0506  <.0001   3.0559  0.058   <.0001   3.0598  0.5596  <.0001
Experience             0.09525  0.0258  <.0001   0.0859  0.0262   .001    0.3811  0.2141   .075
TYPE                                             0.1621  0.0829   .051
Prv.Rev.Hlp                                                               0.1054  0.1335   .43
Prv.Rev.Hlp × Experience                                                 -0.0661  0.0474   .164

Fixed Effects Model (N=3505)
                             Model 4                  Model 6                  Model 8
                         β       S.E.     p       β       S.E.     p       β       S.E.     p
Intercept              5        1.09    <.0001   1.609   1.0954   .142    1.5621  1.0459   .134
Experience             0.035    0.0238   .145    0.008   0.0265   .752   -0.088   0.2625   .739
TYPE                                             0.045   0.1191   .705
Prv.Rev.Hlp                                                              -0.08    0.1676   .633
Prv.Rev.Hlp × Experience                                                  0.019   0.0562   .732

Random Effects Model (N=6015)
                             Model 11                 Model 13                 Model 15
                         β       S.E.     p       β       S.E.     p       β       S.E.     p
Intercept              3.1232   0.0508  <.0001   3.0827  0.0574  <.0001   2.99    0.4814  <.0001
Experience             0.03541  0.01682  .035    0.0333  0.0169   .049    0.338   0.1228   .058
TYPE                                             0.117   0.0771   .129
Prv.Rev.Hlp                                                               0.1462  0.1098   .183
Prv.Rev.Hlp × Experience                                                 -0.0696  0.0368   .059

Fixed Effects Model (N=6015)
                             Model 12                 Model 14                 Model 16
                         β       S.E.     p       β       S.E.     p       β       S.E.     p
Intercept              1.609    1.0954   .142    1.609   1.0954   .142    1.482   0.9958   .137
Experience             0.002    0.0173   .911    0.002   0.0173   .913   -0.064   0.2247   .774
TYPE                                             0.014   0.1066   .899
Prv.Rev.Hlp                                                              -0.056   0.1446   .70
Prv.Rev.Hlp × Experience                                                  0.012   0.0459   .8
Exploratory Analysis
Recall that a major assumption in the above hypothesis testing was that author experience
operationalizes member collective identity or the motivation to adhere to group norms, and/or
promote collective goals. Author experience is confounded with other processes which may also
drive the positive skew observed in the valence distribution of the Epinions data set. The following
is a discussion of some of the alternative, although not mutually exclusive, psychological
mechanisms. Where possible, I also include some exploratory statistical analyses to test, rule out,
and/or control for these alternative mechanisms.
The most obvious alternative explanation for the relationship between author experience
and valence of reviews is that the Income Share Program may provide stronger economic
incentives for positive reviews than for negative reviews. Indeed, the average valence of Long
reviews (M = 3.78, SD = 1.33), which count toward the shared income calculation, is higher than
that of Express reviews (M = 3.23, SD = 1.74), which do not count (t(3503)=10.04, p<.0001;
β(.1046)=.3534, p<.0001). I attempted to control for this alternative explanation by entering review
type as a control variable. As is evident in Table 4, Model 6, when both type and experience are
entered as predictor variables, the experience effect remains highly significant (β(.0262)=.0869,
p<.0001), ruling out the possibility that the relationship between experience and valence is simply
an artifact of the relationship between type and valence. However, the type and experience effects
are not fully independent of each other: the type effect is reduced to marginal significance
(β(.0829)=.1621, p=.051) when experience is entered in the model. In fact, the Sobel test revealed
that experience mediates the relationship between type and valence (t(.0140)=3.1932, p=.001).
Hence, type and experience, at least in part, operationalize the same latent psychological
mechanism, be it economic incentives or social motives. Although the Income Share Program
rules explicitly state that the valence of a review is not directly tied to the income share calculation, I
cannot rule out the possibility that financial incentives are correlated with valence. It is therefore
beyond the scope of this data set to fully rule out the economic incentives mechanism.
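The Sobel statistic used above divides the indirect effect by its approximate standard error; a minimal sketch follows (the path coefficients below are hypothetical placeholders, not the dissertation's estimates):

```python
import math

def sobel(a, se_a, b, se_b):
    """Sobel test: indirect effect a*b over its approximate SE."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return a * b / se_ab

# Hypothetical paths: a = type -> experience, b = experience -> valence
z = sobel(a=0.30, se_a=0.05, b=0.25, se_b=0.06)
print(round(z, 2))  # → 3.42
```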
Another process that may drive the relationship between experience and review valence is
experience enhancing the effectiveness of post-purchase dissonance reduction through emotional
venting. Previous work on the antecedents of WOM found that people reported dissonance
reduction through emotional venting as a frequent reason for engaging in WOM (Sundaram et al.,
1998). Specifically, consumers can reduce their feelings of buyer’s regret by convincing
themselves and others that they made a correct purchase decision. Studies have shown that when
consumers publicly praise a product that they have doubts about, and convince others that the
product is good, their own doubts about the product are reduced (see Engel, Blackwell, & Miniard
1993 for review). Furthermore, the effectiveness of this social proof strategy should be enhanced
by the strength of relationship between the author and her audience (Stuteville, 1968; Engel &
Light, 1968). Hence, as authors become more experienced, they may post more positive reviews in
order to reap the psychological benefit from dissonance reduction/ homeostate restoration. It is
important to note that a negative consumer experience may also be emotionally disturbing and
produce anxiety, which can be alleviated through the venting of these emotions. In fact,
homeostate restoration was reported with equal frequency for both positive and negative WOM
behavior antecedents (Sundaram, et al., 1998). Hence, it is unlikely that the overrepresentation of
positive reviews can be explained by dissonance reduction via emotional expression.
Because the homeostatic restoration utility from emotional venting is determined by the
intensity of the emotional state (Anderson, 1998), this motive should produce a U shape in the
valence distribution of reviews. In line with these self-reports, the valence distribution of naturally
occurring online product reviews typically has a U-shape component, with an overrepresentation of
extreme reviews and/or underrepresentation of moderate reviews (Dellarocas & Narayan, 2006; Hu
et al., 2006). This U shape is present in the Epinions data set as well. As is evident in Figure 1,
there were more than twice as many extremely negative as moderately negative reviews (χ² = 297.85,
p<.0001), and almost twice as many extremely positive as moderately positive reviews (χ²
= 225.69, p<.0001). Furthermore, if authors’ experience enhances the utility derived from
emotional venting, then the overrepresentation of extreme reviews and/or underrepresentation of
moderate reviews (i.e. U shape in the distribution) should be more characteristic of experienced
than inexperienced members.
I explored the relationship between experience and overrepresentation of extreme reviews
by comparing the valence distribution of reviews posted by inexperienced authors with the
distribution of reviews posted by experienced authors. As is evident in Figures 3a and 3b, the U
shape appears to be more descriptive of inexperienced authors than of those with experience.
In other words, while homeostatic restoration may be an important antecedent of review posting, it
appears to be less important for experienced than for inexperienced members. This could be due to
other motives that become more imperative with experience (such as social pressure, economic
incentives, etc.), crowding out this more basic motive.
Figure 3a. Valence distribution of Epinions reviews written by inexperienced authors
Figure 3b. Valence distribution of Epinions reviews written by experienced authors
One other alternative account for the author experience and valence relationship is that
more experienced authors may also be more savvy consumers. This would allow them to make
better consumer decisions over time and write more positive reviews. In other words, the
overrepresentation of positive reviews is a mere reflection of the valence distribution of authors’
consumption experiences: as authors become more experienced consumers and
posters, their review distribution becomes more positive. Recall that I could not detect a
within-member experience effect and only found evidence for more positive posters self-selecting
into writing more reviews. While I have argued that this self-selection process is driven by
collective identity, I cannot rule out the possibility that more savvy consumers self-select into
becoming more experienced posters.
While I have been drawing on the positive relationship between author experience and
review valence to argue for the presence of social influence on review posting behavior, another
important operationalization of social influence in the Epinions data is the direct feedback that
authors receive from readers on their reviews, in the form of helpfulness ratings. In the absence of
direct feedback from their audience, authors may be guided by the general behavioral norm, which
obligates them to post positive reviews and/or inhibits them from posting negative reviews. When
authors do receive direct feedback, which may or may not be in line with the general norm for
positive reviews, the normative pressure on subsequent posting behavior should be weaker, because
social information in the form of helpfulness ratings is more concrete than the vague normative bias
for positive reviews. In other words, the positive relationship between experience and review
valence (Hypothesis 3) should be weaker for reviews preceded by a review from the same author
that was rated as highly helpful.
To test for this negative interaction between experience and previous review helpfulness, I ran a
mixed model with valence as the dependent variable, and experience, previous review helpfulness,
and the interaction of experience with previous review helpfulness as the predictors. I also entered
author as a random effect. As the model summary indicates (Table 4, Model 7), the results do not
support this prediction: neither the main effects nor the interaction was significant. It is
important to note that only 1,484 of the 3,505 reviews were preceded by helpfulness ratings of the
author’s previous review and could be included in this analysis. This subsample was not random and
was confounded with experience and valence. Specifically, the reviews included in the analysis
were written by more experienced authors and were also more positive, so the analysis may have
suffered from a restricted-range problem.
Summary
In this archival study, I explored whether the patterns in the Epinions behavioral data fit
the notion of review posting as an intentional social action driven by collective identity with the
audience. The statistical analyses support the prediction that the overrepresentation of positive
reviews and/or underrepresentation of negative reviews in the review valence distribution are
driven by author experience. As the above discussion suggests, which specific latent variables are
operationalized by author experience is up for debate. While I attempted to statistically control for
some of these alternative explanations, the methods of this study do not permit a direct test of
the psychological mechanisms because these variables are unobservable. In other words, this
exploratory study is weak in construct validity; but because the data sample was randomly selected
from actual review posting behavior, the documented phenomenon has mundane realism.
CHAPTER 3
Introduction
The archival investigation in Chapter 2 provided initial evidence for the characterization of
review posting behavior as a speech act. Specifically, I argue that the convergence of more active (and
presumably more collectively identified) authors’ reviews with readers’ preference for positive reviews,
supports the role of social influence on online review posting. This social influence account assumes
that authors who self-select into writing more reviews do so because they are more collectively
identified with their audience than authors who only write one or two reviews, and are therefore more
motivated to adhere to the group norm favoring positive over negative reviews. The key element in the
social influence account is the presence of a positive opinion expression bias resulting from social
pressure favoring positive over negative reviews. The behavioral pattern observed in the Epinions data
could also be an artifact of self-selection, such that consumers who are more positive people (i.e. more
biased for positive information) self-select into becoming more active authors. Also, perhaps members
who self-select into writing more reviews are also the ones who make better consumer decisions, and
therefore have more positive experiences to report than those who only write a few reviews. Neither of
these alternative explanations for the behavioral patterns requires the presence of an opinion expression
bias.
These alternative accounts and the social pressure account are not mutually exclusive, and they
are unobservable in the archival data set. In order to tap into the social-psychological mechanisms of
Epinions members’ posting behavior, I requested permission from Epinions’ administrators to survey
their members. I was permitted to solicit the random sample of 1,000 members of the original archival
data set to participate in a survey, which allowed me to obtain more direct measures of the proposed
processes (i.e. collective identity, felt social pressure, opinion expression rate).
The survey data allowed me to conduct two studies. Study 1 had three objectives. First, I
tested for the presence of social pressure promoting positive opinion expression and/or inhibiting
negative opinion expression. Second, I explored the relationship between collective identity, social
pressures, and opinion expression rates. Third, I introduced self-monitoring as a moderator of the
collective identity effect, to test whether conformity to group norms accounts for the collective identity
effects on opinion expression rates. The main objective of Study 2 was to merge participants’ survey
data with their actual posting behavior, derived from the Epinions archive. By merging the two data
sets, I explored how the Epinions community functions as a reference group and how the pressure to
adhere to group norms affects posting patterns.
Study 1 Predictions
Social identity is closely wedded to norms that define how group members should think, feel,
and behave. Through the process of referent informational influence, members infer the norms of their
group from prototypical properties of the group. In turn, this prototype informs a group of members
about what behaviors are typical and hence appropriate, desirable, or expected (Tajfel & Turner, 1986).
Research has shown that computer-mediated groups can develop a meaningful and strong sense of
identity (e.g. Bouas & Arrow, 1996; Finholt & Sproull, 1990; Lea, Spears & de Groot, 1998; Postmes,
et al., 1999), despite the fact that many of the factors traditionally associated with social and
interpersonal attraction and with normative influence are absent in such contexts. A network analysis of
a large sample of computer-mediated group structures revealed that group norms defined
communication patterns within groups and that conformity to these norms increased over time (Postmes, et
al., 2000). Within the context of online communities, where members share a common sense of
identity but are anonymous, conformity has been shown to be driven by deindividuation. Specifically,
member anonymity has been shown to accentuate the interchangeability of group members, driving
member convergence toward prototypical group behavior (e.g. Lea & Spears, 1991; Lea et al., 1998;
Postmes, 1997; Postmes, Spears, Sakhel & de Groot, 1998; Postmes & Spears, 1999; Spears, Lea, &
Lee, 1990; Walther, 1997; see Postmes, Spears, and Lea, 1998 for review).
In the Epinions community, positive reviews are more pronormative than negative ones,
because positive reviews are not only more favored by readers but are also more common than negative
reviews. If Epinions members are motivated to abide by this prescriptive and descriptive norm, then
they should feel more obligated to post reviews about their positive consumer experiences and/or
inhibited from posting reviews about their negative consumer experiences.
Hypothesis 1: The posting rate of negative experiences will be lower than the posting rate of positive experiences.

I also measured members’ collective identity with the Epinions community and had them
recall the frequency with which they published reviews about their positive product/service experiences
versus the frequency with which they published reviews about their negative product/service
experiences. In this way, I investigated how positive and negative opinion expression rates shifted as a
factor of members’ collective identity. I predicted that collectively identified members would report
higher levels of conformity than their weakly identified counterparts.
Hypothesis 2: Strength of identification with the Epinions group will increase positive posting rate and decrease negative posting rate.
In order to verify that the above effects are mediated by the motivation to adhere to social
norms, I administered the self-monitoring scale, which measures the extent to which participants were
sensitive to their social surroundings and motivated to adjust their behavior to fit normative
expectations (Snyder, 1986). I explored how this individual difference measure moderated the effects
predicted in hypothesis 2. Specifically, if the effect of collective identity on posting rates is stronger for
high than for low self monitors, then norm conformity, at least in part, drives the effect of collective
identity on review posting behavior.
Hypothesis 3: The positive (negative) effects of identification on positive (negative) posting rate will be stronger for high than for low self monitors.
To summarize, I argue that Epinions members post reviews about their positive consumer
experience more frequently than about their negative consumer experiences due to the need to conform
to the behavioral norm which favors positive over negative reviews.
Study 1 Methods
Questionnaire development
The questionnaire was fine-tuned on the basis of an ongoing informal netnography that I
started from the moment I gained entry to the Epinions community in October of 2008. As part of this
ongoing research, I read reviews, forum discussions, chat sessions, member profiles, members’
personal homepages, guest book messages, etc. I also conducted in-depth interviews with several
community administrators.
Pretesting was performed in two sequential stages. First, a draft of the questionnaire was
pretested in personally administered interviews with two organizational behavior academics who
evaluated domain representativeness, item specificity, and clarity of constructs. The second pretest
involved administering an online version of the questionnaire to 10 members and 2 Epinions
administrators. They were asked to indicate any ambiguity or other difficulties they experienced in
responding to the items, as well as offer any suggestions they deemed appropriate. After both tests,
items that were identified as problematic were either revised or eliminated, and new items were
developed. Demographics and internet profile information were placed at the end of the survey, where
respondents also entered their email address in order to be eligible for the incentives used to boost
participation.
Recruitment
The survey population consisted of a random sample of 1,000 active6 Epinions members.
Recruitment was realized through an email announcement sent out by one of the administrators, which
contained a direct link to the online survey, explained the purpose of the study, and guaranteed the
confidentiality of responses. Halfway through the data collection period, another email was sent out
with a similar announcement. To encourage members to participate, I offered members the incentive
of a 10% chance to win a $25 VISA gift certificate. The survey was online for two and one-half weeks,
generating 209 responses. Eleven randomly selected participants received $25 VISA gift certificates.
Because the email was sent out to only 1,000 “active” members, I had a reliable means of
determining the response rate of 21%, which was higher than the standard 15% response rate that
Epinions administrators typically got from running their own surveys. The higher response rate is
6 In the survey, I defined an active member as a member who had logged into the Epinions site at least once in the last 5 months. Note that in the archival data set, an active member was defined as one who had logged into the site at least once in the last 3 months. The difference reflects the 2-month lag between archival data collection and subsequent survey data collection.
probably due to the inclusion of a monetary incentive for participation as well as a curiosity factor
resulting from the survey’s academic affiliation. Table 5 lists the percentages of respondents per day.
Initial response was high. In the course of the first 5 days, the survey generated 57% of the final
number of responses. Daily responses then gradually lessened, stabilizing at about 15 to 24 per day
during days 3 to 6. On day 7, another email with the survey announcement was sent out, at which
point the response rate peaked for the next few days and then tapered off until the end of the data collection
period.
Table 5. Epinions survey sample accumulation over data collection period
Day      n    Cum. %        Day       n    Cum. %
Day 1    37   17%           Day 8    17    83%
Day 2    28   30%           Day 9    13    89%
Day 3    24   41%           Day 10    7    92%
Day 4    20   50%           Day 11    7    95%
Day 5    15   57%           Day 12    4    97%
Day 6    16   64%           Day 13    3    99%
Day 7    25   75%           Day 14    3    100%
It is important to note that the recruitment procedure may have created two selection biases.
First, there is the critical issue of over-sampling of more active and dedicated members. Because of the
recruitment method, members who are more likely to read emails from Epinions administrators were
more likely to be exposed to the recruitment message, and thus had a higher chance of being in the
survey. However, a comparison between the average tenure, volume of posting activity, average review
valence, and average helpfulness rating of participating and non-participating members revealed no
significant difference between the two groups (see Table 6).
Table 6. Comparison of participating vs. nonparticipating Epinions author sample
Variable (description)                               Group       N    Mean   SD       t       df    p
Tenure (time since joining, years)                   NonPartc.   790  1.59   1.44    -1.53    991   .126
                                                     Partc.      203  1.78   1.97
Experience (COUNT, total reviews)                    NonPartc.   790  2.76   8.16    -0.806   991   .420
                                                     Partc.      203  3.27   7.64
Express Rev. Experience (COUNT, express reviews)     NonPartc.   790  1.73   7.96    -1.18    991   .238
                                                     Partc.      203  2.47   7.67
Long Rev. Experience (COUNT, long reviews)           NonPartc.   790  1.03   1.55     2.00    991   .046
                                                     Partc.      203  0.80   0.61
Review Valence (average valence of reviews)          NonPartc.   787  3.01   1.70    -0.168   987   .866
                                                     Partc.      202  3.04   1.69
Review Helpfulness (average helpfulness of reviews)  NonPartc.   703  3.67   0.60     0.246   880   .792
                                                     Partc.      178  3.66   0.67
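The participant-versus-nonparticipant comparisons in Table 6 are independent-samples t-tests; the df column follows the pooled-variance pattern n1 + n2 − 2 (e.g., 790 + 203 − 2 = 991 for the count variables). A minimal plain-Python sketch of that test (the function name is illustrative, not the analysis code actually used):

```python
from statistics import mean, variance
import math

def pooled_t_test(x, y):
    """Independent-samples t-test with pooled variance.

    Returns (t, df) with df = n1 + n2 - 2, matching the df pattern
    in Table 6 (e.g., 790 + 203 - 2 = 991).
    """
    n1, n2 = len(x), len(y)
    # Pooled variance: df-weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    t = (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

Applied to, say, the long-review counts of the two groups, this yields the t = 2.00, df = 991 comparison reported in Table 6.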
A second bias can arise from the financial incentive to participate in the survey, which may
have attracted commercially oriented members more than altruistically oriented members. I gauged the
presence of this bias by comparing the number of long reviews (which are part of the income share
program) written by participants versus nonparticipants. In fact, participants wrote fewer long reviews
than nonparticipants (M = 0.80, SD = 0.61 vs. M = 1.03, SD = 1.55, respectively, p < .05), suggesting that the selection bias
may be in the opposite direction, favoring collectively rather than commercially oriented authors. The
presence of this bias weakens the conservativeness of my hypothesis testing because, by selecting for
less commercially (and presumably more communally) oriented authors, it may overestimate the effect
of collective identity on posting behavior.
Survey measures
Demographics. The survey included a number of questions regarding each respondent’s basic
demographic characteristics (e.g. gender, age, race, income, and education). These responses revealed
that 54% of the sample was male and 84% white, with an average age of 45. The median income
level was $60,000–$100,000 per year, and 71% of the participants identified themselves as
having earned at least a bachelor’s degree. A comparison between the characteristics of the sample, and
the Epinions user population, as described by Quantcast.com7 (see Table 7), revealed that the sample
closely reflected the population characteristics, except that the participants of the survey were more
educated than the Epinions user population as a whole.
Table 7. Comparison of sample to Epinions user population
Demographics        Sample        Population*
% Men               54%           55%
Average Age         45            44
% White             79%           84%
Median Income       $60K–$100K    $60K–$100K
% with BA Degree    71%           65%
*Estimates from Quantcast.com
Internet use. In addition to collecting demographic data, I also asked respondents to answer
several questions about their internet usage. The sample represents a group of fairly sophisticated
internet users: 80% had over 5 years’ internet experience, 58% spend more than 10 hours/week online,
81% made at least one online purchase in the last 12 months, and 96.5% are members of at least one
other online virtual community.
Recollection of review posting rate. Because I collected data by means of a survey, I could
only capture members’ recollections of their posting rate of negative and positive experiences. They
were asked to recall, “Since joining Epinions, what percent of the times when you were satisfied with a
product/service did you write a positive review?” and “Since joining Epinions, what percent of the
times when you were dissatisfied with a product/service did you write a negative review?” Participants
indicated their answers on 11-point Likert scales, anchored at every tenth percentile from <1% to 100%.
I randomized the order of these questions, in order to counterbalance any order effects.
7 Quantcast (http://www.quantcast.com/) is a media-measurement and web-analytics service that provides audience statistics for millions of websites. Quantcast analyzes web sites to obtain accurate usage statistics and thereby derive users’ demographic profiles.
Collective Identity. To assess each respondent’s level of collective identification with the
Epinions community, I included items from Luhtanen and Crocker’s (1992) collective self-esteem
scale. Specifically, I drew 4 items representing the Importance to Identity subscale (Luhtanen &
Crocker, 1992) in order to form a measure of an individual’s level of identification with the Epinions
community. The items were the following: 1) “Overall, my Epinions membership has very little to do
with how I feel about myself,” 2) “My membership in Epinions is an important reflection of who I am,”
3) “My membership in Epinions is unimportant to my sense of what kind of person I am,” and 4) “In
general, belonging to my Epinions group is an important part of my self image.” For each item, the
participants’ ratings ranged from (1) Strongly Disagree to (7) Strongly Agree. After reverse coding
items 1 and 3, I averaged responses to these items in order to create an overall score for each
individual’s level of identification with the Epinions community. The reliability for the overall scale
was α =.79.
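The scoring procedure described above — reverse-code items 1 and 3, average the four items, and summarize reliability with Cronbach’s alpha — can be sketched as follows. This is an illustrative implementation, not the code used in the dissertation; the function names and the example data are hypothetical:

```python
from statistics import mean, variance

def score_identity_scale(responses, reverse_items=(0, 2), scale_min=1, scale_max=7):
    """Score a Likert scale: reverse-code the listed item columns
    (item k becomes min + max - k), then average across items so each
    respondent gets one overall identification score."""
    scored = []
    for row in responses:
        items = [(scale_min + scale_max - v) if j in reverse_items else v
                 for j, v in enumerate(row)]
        scored.append(mean(items))
    return scored

def cronbach_alpha(responses):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score), computed on already-aligned (reverse-coded) items."""
    k = len(responses[0])
    item_vars = [variance([row[j] for row in responses]) for j in range(k)]
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

For the four-item Importance to Identity subscale used here, the same two steps produce the per-respondent identification scores and the reported α = .79.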
Self-monitoring. Participants were asked to fill out the revised Lennox and Wolfe (1984) Self-
monitoring Scale. This 13-item scale measures the tendency to modify one’s self-presentation to fit the
social setting and the expressive behavior of others. The following are two representative items from this scale:
1) “Once I know what the situation calls for, it is easy for me to regulate my actions accordingly,” and
2) “I have trouble changing my behavior to meet the requirements of any situation I find myself in.”
For each item, the participants’ ratings ranged from (1) Strongly Disagree to (7) Strongly Agree. After
reverse coding some of the items, I averaged responses of the 13 items in order to create an overall
score for each individual’s level of self-monitoring. The reliability for the overall scale was α = .93.
Respondents
Average time to complete the survey was approximately 14 minutes, while the median time
was closer to 11 minutes. In total, 219 Epinions members completed the survey. Of these, 13
participants were dropped from the analyses for one of the following reasons: 1) skipping 20% or more
of the questions, or 2) spending an extraordinarily short time on the survey, which might indicate a
lack of attention. Removal of these cases left 206 respondents for analysis.
Study 1 Results and Discussion
Hypotheses testing
In the following section I present results from a series of analyses that test my predictions
concerning the effects of valence of experience, identification, and self-monitoring on posting rates of
Epinions members’ consumer experiences.
Hypotheses 1 and 2. I predicted that the posting rate of negative experiences would be lower
than the posting rate of positive experiences (hypothesis 1), and that this effect would be amplified by
collective identity, such that identification with the Epinions community would increase positive opinion
expression and decrease negative opinion expression (hypothesis 2). A repeated-measures GLM, with
valence of experience as a within-subject factor, revealed a significant valence effect, F(206)=95.76,
p<.001, supporting hypothesis 1. Furthermore, collective identity interacted with this valence-of-
experience effect, F(206)=5.61, p=.019, such that the valence effect was stronger for high than for low
identifiers.
I also predicted that the above interaction would be driven by identification, simultaneously
increasing frequency of positive review posting and decreasing the frequency of negative review
posting. In other words, I expected there to be a positive relationship between identification and
positive review posting, and a negative relationship between identification and negative review posting.
Follow-up regression analysis confirmed that high identifiers reported higher frequency of positive
review posting than low identifiers (β(.140)=.440, p=.002). However, I was unable to find evidence for
high identifiers reporting lower frequency of negative review posting than low identifiers (β(.14)=
-.033, p>.8). As a result, while the overarching valence X collective identity interaction effect was in
the predicted direction, it was entirely driven by the positive effect of identification on positive review
posting rate and not on the negative effect of identification on negative review posting rate (see Figure 4
for visual display of the interaction pattern).
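In a two-level repeated-measures design of this kind, the valence main effect is equivalent to a paired t-test on the positive-minus-negative difference score, and the valence X collective identity interaction is equivalent to the slope from regressing that difference score on identification. A minimal sketch of these standard equivalences (illustrative code with hypothetical variable names, not the original analysis):

```python
from statistics import mean
import math

def paired_valence_analysis(pos_rate, neg_rate, identity):
    """Difference-score view of the repeated-measures design.

    Returns (t_valence, slope): a paired t on d = pos - neg (the
    within-subject valence effect), and the simple-regression slope of d
    on the continuous moderator (equivalent to the valence x moderator
    interaction in a two-level repeated-measures GLM).
    """
    d = [p - n for p, n in zip(pos_rate, neg_rate)]
    n = len(d)
    md = mean(d)
    sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
    t_valence = md / (sd / math.sqrt(n))
    # OLS slope of the difference score on the moderator.
    mx = mean(identity)
    sxx = sum((x - mx) ** 2 for x in identity)
    slope = sum((x - mx) * (y - md) for x, y in zip(identity, d)) / sxx
    return t_valence, slope
```

A positive slope here corresponds to the reported pattern: the positive-over-negative posting gap widens as identification increases.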
Figure 4. Epinions authors’ self-recollection of posting rates (collective self-esteem split into three equal groups at 2.25 and 3.50; error bars show 95% CIs)
Hypothesis 3. I predicted an interaction between identification and self-monitoring, such that
the qualifying effect of identification on posting frequency would be driven by high self monitors. In
other words, self-monitoring would magnify the opposing identification effects on positive and negative
review posting rates. To test this prediction, I regressed positive posting rate on identification, self-
monitoring, and their interaction. The complete model revealed a positive and significant self-
monitoring X identification interaction (β(.12)=4.12, p=.001). Note that when I regressed negative review
posting rate on identification, self-monitoring, and the identification X self-monitoring interaction, neither
the identification main effect nor the interaction was significant (see Figure 5 for a visual
representation of the pattern in the data).
Figure 5a. Epinions authors’ self-recollection of posting rates: low self-monitors
Figure 5b. Epinions authors’ self-recollection of posting rates: high self-monitors
Interestingly, there was a significant negative main effect of self-monitoring on negative
review posting (β(.056)=-.793, p<.001). The presence of the self-monitoring effect on negative opinion
expression is in line with the consumer complaining literature which documented consumers with high
self-presentation concerns feeling more reluctant to complain (Marquis & Filliatrault, 2002; Kowalski
& Cantrell, 1995; Slama & Celuch, 1994). This is because people who complain are often labeled as
whiners and feel embarrassment when the negative information is not well received and the audience
disapproves of the complaint and/or the complainer (Kowalski, 1996).
To summarize, the data supports my prediction that identification increases the rate of positive
experience posting and that this effect is driven by normative pressure. However, there is no evidence
for identification decreasing the rate of negative experience posting through censorship. One possible
reason why I was unable to detect evidence for censorship is that the obligation to post reviews may
increase with collective identity regardless of valence. In fact, previous research has found that collective
identity drives the overall volume of community participation. As a result, the censorship effect may
have been wiped out by the opposing obligation effect, producing the observed null effect of
identification on negative review posting. Furthermore, the negative relationship between self-
monitoring and the rate of negative review posting suggests the presence of social pressure inhibiting
negative opinion expression; this inhibition, however, is tied not to the specific norms of Epinions but to a
more general tendency to derogate complainers.
Study 2 Purpose
The main weakness of the archival study described in Chapter 2 is the construct validity of
collective identity, which was operationalized by author experience. Because there are a number of
feasible alternative mechanisms which can account for the experience effect, there is no direct evidence
for the proposed role of authors’ collective identity on their behavior (i.e. the valence of their reviews).
To address this weakness, I included in the survey Luhtanen and Crocker’s (1992) collective self-esteem
scale, a reliable and valid measure of collective identity (see Hogg et al., 2004 for review). By
exploring the strength of the relationship between participants’ collective identity scores gathered
during the survey, and the volume and type of their review posting activity, I assessed the role of
collective identity on authors’ behavior.
Study 2 Methods
Level of analysis
In the archival study, the level of analysis was the review level, in
order to test for both within- and between-author effects. When merging the archival data with the
survey data, I switched the level of analysis of the combined data set to the author level. I chose this
level of analysis because the acquired survey data was administered once to each author, not each time
the author wrote a review. In other words, no within-author collective identity effects on review
valence could be explored or detected. Hence, the behavioral data from the archive was aggregated for
each author and merged with that author’s survey data.
Behavioral measures
Experience. Author experience was derived by counting the total number of reviews posted by
each author. The analysis was conducted on 202 authors. Five of 207 authors who participated in the
survey were excluded from the analysis because their experience was more than 10 standard deviations
from the mean experience count. Note that these authors were also excluded from the archival analysis
in study 1 for the same reason.
Express review experience. This variable was derived by counting the total number of express
reviews posted by each author. Recall that express reviews are not part of the income share program
and have no minimum word limits.
Long review experience. This variable was derived by counting the total number of long
reviews posted by each author. Recall that long reviews are part of the income share program and have
a minimum word limit.
Review valence. Review valence was the arithmetic mean of valence of reviews posted by
each author.
Review helpfulness. Helpfulness was the arithmetic mean of the reviews’ helpfulness scores
for each author. Recall that the helpfulness score of each review was an arithmetic mean of the
readers’ helpfulness ratings of that review.
Note that I compared the means of these variables for authors who participated versus those who
did not participate in the survey. These comparisons (Table 6) allowed me to assess the severity of the
self-selection problem in the survey. The only significant difference between participants and non-
participants was that participants wrote fewer long reviews than non-participants (see discussion
above).
Exploratory analysis
Effects of collective identity on posting behavior. Table 8 summarizes the correlations
between the survey and the behavioral variables.
Table 8. Correlations among behavioral and survey data

Variable                       N      2       3       4       5       6       7       8       9
Survey data
1 Positive posting rate       202   -.076   .211**   .102    .068    .020    .066    .059    .065
2 Negative posting rate       202           -.037   -.294** -.052    .021   -.054   -.039   -.019
3 Collective Identity         200                    .065    .268**  .269** -.039    .223**  .097
4 Self-Monitoring             195                            .036    .029    .034    .062    .125
Behavioral data
5 Experience                  202                                    .997** -.044    .145*   .250**
6 Express Rev. Experience     202                                           -.124   -.036    .108
7 Long Rev. Experience        202                                                    .147*   .240**
8 Review Valence              202                                                            .097
9 Review Helpfulness          178

** Correlation is significant at the .01 level (2-tailed).
* Correlation is significant at the .05 level (2-tailed).
The positive relationship between collective identity and experience (r(200)=.268, p<.001)
provides initial support for the operationalization of collective identity with experience. Collective
identity is also positively correlated with the average valence of authors’ reviews (r(200)=.223,
p=.002).8 In fact, the strength of the relationship between collective identity and valence is stronger
than the relationship between experience and valence (r(202)=.145, p=.039). This pattern suggests that
the relationship between valence and experience may be explained by the relationship of both of these
variables to collective identity. In other words, collective identity may serve as the psychological
mechanism that accounts for the relationship between experience and valence. When I regressed
review valence on both collective identity and experience, the relationship between experience and
valence was reduced from β(.015)=.032, p=.04 to β(.016)=.02, p=.2, while the relationship between
collective identity and valence remained significant (β(.089)=.244, p=.006). Follow-up analysis did not
yield a significant mediation test (Sobel t=1.19, p>.2), although the pattern is in the predicted direction.
Furthermore, the timing of the data collection does not allow for a valid test of the causal direction of
the relationship among the variables, because the predictor variable (collective identity) was gathered
after the dependent variables.
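The Sobel statistic reported above tests the indirect effect as the product of the two mediation paths, a (predictor to mediator) and b (mediator to outcome, controlling for the predictor), divided by an approximate standard error of that product. A minimal sketch (illustrative, not the original analysis code):

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel test for an indirect effect a*b.

    a, se_a: predictor -> mediator coefficient and its standard error.
    b, se_b: mediator -> outcome coefficient (controlling for the
             predictor) and its standard error.
    Returns (z, two-sided p from the standard normal distribution).
    """
    z = (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    p = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p
```

With the coefficients from the two regressions above, this statistic works out to the reported nonsignificant value (Sobel t=1.19, p>.2).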
In my analysis of the archival data, I made a tentative assumption that review type
operationalized the presence of economic incentives. Because long reviews are part of the Income
Share Program, I suggested that economic incentives would crowd out collective incentives to post
long reviews but not express reviews, which are not economically incentivized. This would imply that
the relationship between collective identity and volume of review posting experience should be driven
more by express review posting volume than by long review posting volume. In support of this
proposal, the merged data set yielded a positive relationship between collective identity and express
review experience (r(200)=.269, p<.0001), but no significant relationship between collective identity
and long review experience (r(200)=-.039, p>.05). This pattern provides initial evidence for the
crowding out of collective motives by financial incentives in long review posting behavior. It also
validates the operationalization of the presence of economic incentives by review type.
Finally, I explored the relationship between collective identity and helpfulness of authors’
reviews. Recall that on the review level of analysis, author experience was a strong predictor of a
review’s helpfulness. This relationship was replicated on the author level of analysis (r(178)=.250,
p=.001), but the positive relationship between helpfulness and collective identity (i.e. the proposed
psychological mechanism) was not significant (r(176)=.097, p>.1). In other words, highly identified
members did not receive higher helpfulness ratings on their reviews than their weakly identified
8 Despite this relationship, follow-up analysis revealed that more collectively identified authors write both more positive (r(200)=.274, p<.001) and more negative reviews (r(200)=.225, p=.001) than less collectively identified authors, although the relationship is stronger for the positive than for the negative reviews (F(198)=13.36, p<.001).
counterparts. This null effect may be due to lack of power in the aggregated subset of the original
review sample.
Effects of self-monitoring on posting behavior. It is interesting to note that there is no
significant relationship between self-monitoring and either authors’ experience or the average valence
of their reviews. Recall that in the survey analysis, self-monitoring was negatively related to negative
opinion expression rates via posting. Based on the complaining literature, I attributed this effect to
magnified impression management concerns of high self-monitors, which inhibited them from posting
their negative opinions because doing so would put them at risk of being derogated by the audience as whiners. If self-
monitoring inhibits authors from posting negative reviews, then I expected to find a negative
relationship between authors’ self-monitoring scores and their average valence of reviews. I did not
detect a significant relationship between self-monitoring and review valence (r(200)=.062, p>.3),
suggesting that while impression management may have driven authors to underreport their rate of
complaining, these concerns did not affect their volume of complaining. The null self-monitoring effect
on posting behavior is in line with previous investigations of online group-based behavior, which shows
that for anonymous computer mediated communities, impression management concerns do not typically
account for observed norm conforming behavior (Bagozzi & Dholakia, 2002). Rather, more
unconscious processes such as deindividuation, which springs from collective identity, drive adherence
to norms (see Postmes, Spears, & Lea, 2000 for a review).
Discussion
The purpose of Study 2 was to explore how the Epinions community functions as a reference
group and how the pressure to adhere to group norms affects participation patterns in the community via
review posting behavior. My overarching prediction, pertaining to the Epinions data set, was that
authors’ collective identity drives their positive opinion expression and inhibits their negative opinion
expression, which in turn increases the average valence of their posted reviews. Although the survey
and archival data yielded both a positive relationship between collective identity and positive opinion
posting rates and a positive relationship between collective identity and the average review
valence, I could not conduct the full path analysis due to the timing of the data collection. Specifically,
the independent variable (i.e. collective identity) and the mediator variable (i.e. opinion posting rates)
were preceded by the dependent variable (i.e. average review valence). Nonetheless, the significant
relationship between authors’ collective identity with the Epinions community and their actual review
posting behavior provides a compelling case for the role of social motives in review posting behavior.
While the strength of the relationship may be inflated, due to the possibility of a selection bias in the
survey sample for more engaged and presumably more collectively identified members, the fact that
these two variables were collected two months apart and through very different methods emphasizes the
robustness of this relationship.
But even if this relationship is present, there is no direct evidence for it driving the observed
positively skewed J-shaped review valence distribution on the product level of analysis, which
originally inspired this investigation. Many antecedents simultaneously drive this complex social
behavior, and the relevance and strength of these antecedents may vary due to contextual factors,
individual differences, and the countless interactions among them. The purpose of this study, however,
was not to claim that social motives are ubiquitous and dominant in review posting behavior. Rather, I
sought to illustrate that review forums can function as reference groups for authors of reviews, and
reviews are not posted in a social vacuum.
The generalizability of the findings from the Epinions data set to other consumer review
forums is limited for two main reasons. First, the direction of the relationship between social pressures
and the shape of the review valence distribution depends on the specific behavioral norm that is shared
among its users. Although the J-shaped distribution appears to be the most common review valence
distribution for products (see Hu et al., 2007 for review), other review forums may be characterized by
norms favoring negative reviews or moderate reviews. For example, in forums that review services,
readers may find negative reviews more helpful than positive ones, because this negative information
would save them from forming negative relationships with bad service providers. In forums that review
high-priced items such as cars and appliances, readers may find more moderate and balanced reviews
more helpful, because they would let them know about both pros and cons of the product before making
an important purchase decision, which may reduce the severity of buyer’s regret.
The strength of social pressures on review posting behavior depends on the strength of its
community component. Recall that on Epinions, authors must register with a valid email address in
order to post reviews, members can communicate by posting messages on each other’s personal web
pages, and readers can provide authors with feedback via the helpfulness ratings which are available for
authors on their personal web pages. Sites like Citysearch and InsiderPages, on the other hand, do not
require authors to confirm their email addresses when registering to post a review, reviewers cannot
directly communicate with each other, and although readers can rate the reviews on helpfulness, the
main function of this information is to help other readers rather than provide feedback to authors. At
the other extreme, forums such as Yelp incorporate advanced social networking functionalities and
sophisticated reputation systems, allowing members to develop their social identities and interpersonal
relationships with fellow members.
The extent to which social pressures account for the shape of the review valence distribution
also depends on their strength relative to other motivating factors involved in review posting, such as
financial incentives and relationship with the product/service provider. The presence and strength of
these factors are at least in part determined by the specific structure of the forum. For example,
Epinions has an income share program designed to financially incentivize members to post reviews,
which may crowd out the social motives to post. On Yelp, the structure of the forum is designed not
only to involve consumers but also to make the product/service providers active participants. Hence,
review posting behavior on Yelp may be driven by authors’ relationship with the product/service
provider in the form of positive and negative reciprocity motives. Angie’s List shares this characteristic
with Yelp, but unlike Yelp and most other consumer review forums, the role of the administrators in
managing the content and quality of reviews is much more prevalent.
The above discussion highlights the diversity of consumer review forums, suggesting that the
generalizability of the descriptive findings from the first two studies of this dissertation is limited.
Although previous research on other review forums such as Amazon.com has also found J-shaped
review distributions, these distributions may be driven by a different motivational model than the one I
derived for Epinions. Furthermore, other review forums may not share the J-shaped review distribution,
even though social pressure may be even more prevalent, depending on the structure of the forum.
Despite the limited generalizability of the patterns found in the Epinions data, these first two
studies achieved the main objective of this dissertation, which is to undermine the premise shared
among behavioral economics investigations of online reviews, namely that online reviews are posted in
a social vacuum. The exploratory findings provide initial evidence for consumer reviews not just being
mere representations of consumer experiences, but also products of the social context of the forum.
While the specific characteristics of the social pressures may differ across the forums and even within
forums, their presence and effect on posting behavior is likely to escalate over time, due to the growing
trend of introducing social networking functionalities into the structures of consumer review forums.
CHAPTER 4
Introduction
The archival and survey Epinions data demonstrate that online product reviews are socially
motivated acts of speech. In this study, I draw on Grice’s postulation that speech acts promote and
sustain meaningful conversation (Grice, 1989; Ho & Swan, 2004), and propose that online product
reviews derive from conversational exchanges largely consisting of previous reviews. Specifically, I
explore how authors’ intentions to post reviews are affected by the presence and valence of previous
reviews of the same product/service.
The argument that consumers influence each other in forming and disseminating their opinions
is supported by investigations of the temporal evolution of large archives of product reviews. These
studies find that product reviews are not only determined by product quality but also by prior reviews as
well as professional reviews. Using data from CNET.com, Gau and Gu (2008) found a positive
influence of previous reviews’ valence on subsequent reviews’ valence, leading to an information
cascade or group polarization patterns within product review distributions. However, Wu and
Huberman (2008) found the opposite anti-polarization effect, whereby previous comments and ratings
of movies elicit contrarian views that soften previous opinions, especially for previously negatively
reviewed movies.
Another line of inquiry found that the visibility and volume of previous reviews affects
subsequent posting frequency, although the strength and direction of these effects varies across opinion
valence and product type. In one investigation, Dellarocas and Narayan (2006) document a lower
posting frequency for tangible products that have been heavily reviewed, especially if the previous
reviews are positive. The investigators attributed this effect to a decrease in author motivation to post,
due to a diminishing marginal influence of subsequent reviews on public opinion. For movies, however,
a large number of previously posted reviews increased subsequent reviewers’ propensity to post reviews
for the same movie, even though past a certain volume, additional reviews add little to what has already
been said (Dellarocas, Awad & Zhang, 2004).
Taken together, these archival studies provide initial evidence that previous reviews
influence both the content and occurrence of subsequent reviews; but because these investigations
were conducted at the product level of analysis, the psychological mechanisms that drive this temporal
evolution of reviews were unobservable. In the present study, I switch the level of analysis from product
to author level and conduct a controlled experiment designed to test the effect of social norms on
authors’ posting intentions. The experiment tests an internally valid causal story for how social
normative pressures, in the form of social obligations and censorship, can affect consumers’
decisions to post a review about a product/service. I take the position that, like other speech acts,
reviews are complex social phenomena, governed by a complex system of motivational factors. Hence,
I do not claim that the specific causal story I propose and illustrate through the hypothesis testing has
widespread generalizability or fully accounts for the temporal evolution of review posting behavior
observed on the product level of analysis. Rather, I seek to provide an internally valid example of
social motivations arising from a product’s/service’s previous review history and influencing
consumers’ decisions to review that product/service.
Study 3 Predictions
Previous archival investigations exploring the temporal evolution of reviews have found
evidence for previous reviews both increasing and decreasing subsequent review posting volume. In
line with the descriptive findings of these previous studies, I propose a dual-process normative
pressure system, which simultaneously promotes (via obligation) and inhibits (via censorship)
subsequent review posting behavior. I have chosen to focus on the effect of previous reviews on the
occurrence of subsequent contrarian reviews, because such behavior should simultaneously involve
both censoring and obligatory social pressures. With regard to obligation pressures, authors’ altruistic
reasons (i.e. obligation) to post should be crowded out by the presence of a previous review, assuming
that reviews are public goods and are altruistically motivated (Sundaram et al., 1998; Hennig-Thurau et
al., 2004). This prediction is based on previous evidence of the crowding out phenomenon, which
occurs when altruistic contributions from a given individual diminish as other parties or individuals
increase their contributions (Abrams & Schmitz, 1978; Andreoni, 1989; Dellarocas & Narayan, 2006).
With regard to censoring pressures, if the previous review(s) are in disagreement with the
opinion of the author who has not yet posted, then they should inhibit her posting intentions, especially
if the authors of the previous reviews are more entitled to their opinions. This prediction is based on the
findings from the psychological standing literature, which show that authors must feel entitled to their
opinion in order to make it public, especially if this opinion violates group consensus (see Miller,
Effron & Zak, 2008 for a review). In other words, by focusing on contrarian review posting behavior,
this experiment was designed to simultaneously test for both the obligation and censorship effects on
posting intentions.
The first objective of this study is to test whether broad social norms of obligation and
censorship are relevant in the consumer review forum context. The second objective is to explore how
the specific review forum community norm favoring positive over negative reviews, documented in the
Epinions data set as well as other large consumer review forums such as Amazon.com and CNET.com
(Hu et al., 2006; Gau & Gu, 2008), interacts with the broad obligatory and censoring social pressures.
Previous work at the product level of analysis provides some initial evidence for valence-based asymmetry
in the frequency of subsequent contrarian review posting. In one investigation, Wu and Huberman
(2008) found a higher frequency of contrarian reviews for previously negatively reviewed movies than
for previously positively reviewed ones. In line with this pattern of findings, I expected that when
consumers simultaneously face both obligatory and censoring social pressures, obligation will prevail
over censorship for satisfied consumers’ decisions to post positive reviews, while censorship will
prevail over obligation for dissatisfied consumers’ decisions to post negative reviews.
To test for the above predictions, participants of this study imagined themselves in either a
satisfying or dissatisfying service experience, and reported the likelihood that they would post a review
that contradicts previous review(s). I introduced two social contextual factors into the hypothetical
scenario of the study: 1) volume of previous reviews, which increased obligation to post, and 2)
previous authors’ entitlement (i.e. standing) to post. Because the marginal utility of posting (i.e.
influence on aggregated product rating) decreases as the volume of previous reviews increases, I
assumed that fewer previous reviews should increase participants’ obligation to post their review.
Furthermore, based on the findings from the psychological standing literature (see Miller et al., 2008 for
review), previous reviews that are written by more entitled authors (operationalized as having more
product/service experience than the participants), should censor participants from posting contrarian
reviews. By simultaneously decreasing the volume of previous reviews from two reviews (i.e. baseline
condition) to one review (social pressure condition), and increasing the relative entitlement of the
previous reviews’ authors, I engaged participants in a hypothetical scenario which simultaneously
made them feel more obligated to post and more censored from posting, than the scenario presented to
participants in the control condition.
For each valence of experience condition, I crafted a hypothetical scenario where participants
faced elevated levels of obligation and censorship compared to the control condition where they faced
lower levels of obligation and censorship. I interpreted an increase in the posting intentions from the
baseline posting intentions as an indication of obligation dominating over censorship; I interpreted a
decrease in the posting intentions from the baseline posting intentions to mean censorship dominating
over obligation. Given this design, I tested the following hypotheses:
Hypothesis 1: Satisfied participants who are faced with higher obligation and censorship pressures (i.e. in the social pressure condition) will be more likely to post than satisfied participants who are faced with lower obligation and censorship pressures (i.e. in the baseline condition). In other words, satisfied participants’ posting intentions will be more in line with the obligation than the censorship norm.

Hypothesis 2: Dissatisfied participants who are faced with higher obligation and censorship pressures (i.e. in the social pressure condition) will be less likely to post than dissatisfied participants who are faced with lower obligation and censorship pressures (i.e. in the baseline condition). In other words, dissatisfied participants’ posting intentions will be more in line with the censorship than the obligation norm.
Study 3 Methods
Participants
Participants were recruited from a web subject pool of a private West Coast university. A
message was sent out to the entire pool, inviting those who had posted at least one consumer review in
the last six months to participate in a short 15-minute study in exchange for $5. One hundred and thirty
participants (67% women) completed the study, with average and median completion times of about
14 minutes. Seven participants were dropped from the analyses for one of the following reasons: 1)
skipping 20% or more of the questions, or 2) spending an extraordinarily short time on the survey,
which might indicate a lack of attention. Removal of these cases left 121 respondents for analysis.
Procedures
Study set-up. Participants read a vignette, in which they were asked to imagine that they
visited a local restaurant for the first time with a friend, after which they logged into an online consumer
review forum such as Epinions, CitySearch or Yelp. To enhance engagement with the vignette,
participants were asked to take a moment and write down what they ordered.
Valence manipulation. Participants were randomly assigned to either imagine that they had a
positive or a negative experience at a local restaurant that they visited for the first time. In the negative
consumer experience condition, participants read the following description of a hypothetical restaurant
visit:
“Now imagine that from bad service to cold and overcooked food you had an exceptionally poor experience in this restaurant. You had to wait ages for your entrees, and when they finally came, they were not exactly the way you ordered them. Your friend, who has strict dietary restrictions, had to give the food back twice until they got it right. And all that with no apologies from any of the staff, who acted the whole time as if they were doing you a huge favor.”
In the positive consumer experience condition, participants read the following description of a
hypothetical restaurant visit:
“Now imagine that from great service to wonderfully prepared food you had an exceptionally great experience in this restaurant. The entrees were perfectly timed, and when served, were exactly the way you ordered them. Your friend had strict dietary restrictions, and the kitchen staff took special care in preparing her entree. Overall, the staff was eager and genuinely happy to do everything possible to provide you with a superb dining experience.”
Social pressure via previous review history manipulation. In the low social pressure (baseline)
condition, participants were told that once they logged into the consumer review forum, they would
discover that this restaurant was reviewed by two other people. If they were in the positive valence
condition, participants were presented with two negative reviews both written by first time patrons (like
the participant). If they were in the negative valence condition, participants were presented with two
positive reviews both written by first time patrons. The complete text of the previous review histories
presented in the baseline (low social pressure) condition is below:
POSITIVE valence condition

“Review #1: This is the first time I have been to this place. The food tastes as if it has been zapped in the microwave before it is brought to your table. I ordered a sandwich which was brought to me with drab lettuce, soggy bread, and a tiny piece of chicken which I literally had to dig for to ensure that it was there. My husband ordered a hot dish which came lukewarm. When I asked them to warm it up, they brought it in 2 minutes, about the time it would take to microwave it. I was really annoyed!
Review #2: I have been there once and my experience was overall poor and unpleasant. I found the service staff to be very rude. Our server visited our table only twice, to take our order and to bring the bill. I literally had to hunt her down for a refill on my water and to get change. It was very unpleasant, but I guess anybody can have a bad day.”
NEGATIVE valence condition
“Review #1: This is the first time I have been to this place. The food bursts with flavor and freshness. I ordered a sandwich which was brought to me with a plump chicken breast, marinated in their finger-lickin’ house sauce, crispy lettuce, and freshly baked bread. My husband ordered a hot dish which filled us up with warmth and was just as delicious when I heated up the leftovers the next day. It was a homerun!
Review #2: I have been there once and my experience was overall outstanding. I found the service staff to be very hospitable. Our server even escorted us to our car with her umbrella to make sure we did not get wet from the rain. I was very pleased, but I guess anybody can have an exceptionally nice server.”
In the high social pressure condition, participants were told that once they logged into the
consumer review forum, they would discover that this restaurant was reviewed by only one other
person. If they were in the positive valence condition, participants were presented with the one
negative review written by a two time patron (more entitled than the participant). If they were in the
negative valence condition, participants were presented with the one positive review written by a two
time patron. The complete text of the previous review histories presented in the high social pressure
condition is below:
POSITIVE valence condition

“This is not the first time I have been to this place. I found the service staff to be very rude and the food tastes as if it has been zapped in the microwave before it is brought to your table. The service and food was poor the previous time as well, but I decided to give them another chance. I ordered a sandwich which was brought to me with drab lettuce, soggy bread, and a tiny piece of chicken which I literally had to dig for to ensure that it was there. My husband ordered a hot dish which came lukewarm. When I asked them to warm it up, they brought it in 2 minutes, about the time it would take to microwave it. The first time we came, our server visited our table only twice, to take our order and to bring the bill. I literally had to hunt her down for a refill on my water and to get change. The first time I thought, “well, anybody can have a bad day,” but the second time I was really annoyed! This is not the way to treat regulars!”

NEGATIVE valence condition
“This is not the first time I have been to this place. I found the service staff to be very hospitable and the food bursts with flavor and freshness. The service and food was outstanding the previous time as well. I ordered a sandwich which was brought to me with a plump chicken breast, marinated in their finger-lickin’ house sauce, crispy lettuce, and freshly baked bread. My husband ordered a hot dish which filled him up with warmth and was just as delicious when I heated up the leftovers the next day. The first time we came, our server even escorted us to our car with her umbrella to make sure we did not get wet from the rain. The first time I thought “well, anybody can have an exceptionally nice server,” but the second time was a homerun for me. What a way to treat regulars!”
When comparing the previous review histories of the baseline versus the high social pressure
conditions, obligatory pressures were intended to be higher in the social pressure condition, because the
marginal utility (effect on aggregated product rating) of an additional review increases as the number of
previous reviews decreases. Also, censorship pressures were intended to be higher in the social pressure
than in the baseline condition, because the previous author’s repeat patronage of the restaurant in the
social pressure condition should disentitle the participants, who have only visited the restaurant once. In
the baseline condition, both authors of the previous reviews have only visited the restaurant once, just
like the participants, and thus the previous authors’ reviews should not disentitle the participants from
expressing their opinion. It is also important to note that the volume of information/experience put
forth by the previous review(s) was kept constant across the baseline (2 reviews) and the social pressure
(1 review) conditions.
To summarize, this study was a 2 (Valence of Experience: positive, negative) X 2 (Quantity of
previous reviews: 1, 2) design, where for each valence condition, the decrease in the quantity of
previous reviews from two one-time patrons (baseline condition) to one two-time patron (social
pressure condition) simultaneously increased the strength of participants’ obligation to post and
censorship from posting a contrarian review.
Measures
Posting behavior. Following the manipulations, participants were asked to indicate whether
they would/would not post a review of the restaurant. This was a dichotomous variable.
Felt social pressure. To confirm that a decrease in the quantity of previous reviews from two
one-time patrons to one two-time patron increased felt obligation and censorship from the baseline
condition, participants were asked to report the extent to which they agreed or disagreed with the
following four items: 1) “I feel obligated to post,” 2) “I feel like I really should post,” 3) “I feel
entitled to post,” 4) “I feel comfortable posting.” For each item, the participants’ ratings ranged from
(1) Strongly Disagree to (7) Strongly Agree. The first two items were aggregated to form the felt
obligation measure (α=.79) and the latter two items were aggregated to form the felt censorship measure
(α =.73).
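The two-item composites are simple aggregates whose internal consistency is summarized by Cronbach’s alpha. As an illustrative sketch (the ratings below are hypothetical, not the study’s data), alpha can be computed from the item variances and the variance of respondents’ scale totals:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale given as one list of scores per item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]       # per-respondent scale totals
    item_variances = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / variance(totals))

# Hypothetical 7-point ratings from four respondents on the two obligation items
obligated = [5, 6, 2, 4]   # "I feel obligated to post"
should    = [6, 6, 3, 4]   # "I feel like I really should post"

alpha = cronbach_alpha([obligated, should])
print(round(alpha, 2))  # → 0.97
```

The same computation applies to the two-item censorship composite and, with k = 6 and k = 13, to the collective identity and self-monitoring scales described below.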
Valence of experience. To confirm that participants in the negative valence condition recalled
a more negative experience than participants in the positive valence condition, I asked them to recall the
valence of the restaurant experience described in the vignette, on a 7-point Likert scale anchored at (1)
Extremely Negative to (7) Extremely Positive.
Collective identity with neighborhood. I included a 6-item group identification scale (Ashforth
and Mael, 1988), with participants’ physical neighborhood defining the group in the items. I chose the
neighborhood, because restaurant reviews tend to be written and read by people who are geographically
proximal to each other. The following are two sample items of the scale: “When I talk about my
neighborhood I usually say ‘we’ rather than ‘they’,” and “When someone criticizes my neighborhood,
it feels like a personal insult.” The 6 items were aggregated to form an audience identification
measure (α =.83).
Self monitoring. I administered the revised Lennox and Wolfe (1986) 13-item self-monitoring
scale. The items were aggregated into a single self-monitoring measure (α =.86).
Note, the collective identity and self-monitoring individual difference measures were included
in the data exploration portion of the analysis.
Study 3 Results and Discussion
Manipulation Checks (Full Sample, N=121)
Valence of experience manipulation. An independent samples t-test confirmed that
participants who were presented with a negative experience vignette recalled their restaurant experience
to be less satisfying (M(1.39)=2.89) than participants who were presented with a positive experience
vignette (M(.47)=6.76), t(119)=25.56, p<.0001.
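The manipulation check above relies on an independent-samples t-test. A minimal standard-library sketch of the pooled-variance (Student’s) t statistic, using made-up 7-point ratings rather than the study’s data:

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Student's t for two independent samples with pooled variance; returns (t, df)."""
    na, nb = len(a), len(b)
    df = na + nb - 2
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / df
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, df

# Hypothetical valence-recall ratings (1 = Extremely Negative, 7 = Extremely Positive)
negative_group = [2, 3, 3, 2]
positive_group = [6, 7, 7, 6]

t, df = pooled_t(negative_group, positive_group)
print(round(t, 2), df)  # → -9.8 6
```

With the full sample of 121 participants split into two groups, the same formula yields the 119 degrees of freedom (n1 + n2 − 2) of the reported test.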
Social pressure manipulation. In designing this study, I assumed that participants would feel
more obligated to post and more censored (i.e. less entitled) from posting when they read one versus
two previous reviews that conflicted with their experience. To validate this assumption, I conducted 2
(Number of reviews: 1, 2) X 2 (Valence: positive, negative) ANOVAs on participants’ felt obligation
and entitlement.
For obligation, the 2 X 2 ANOVA analysis revealed the expected review number main effect,
with participants reporting higher levels of obligation to post their review when there was only one
previous review (M(.18)=5.68) than when there were two previous reviews (M(.17)=4.78),
F(120)=14.79, p=.005. (See Figure 6 for visual display of the pattern.) Note that this effect did not
interact with valence (F<1), suggesting that the strength of the review number manipulation on felt
obligation was not qualified by valence of experience. The valence main effect was also not significant
(F(120)=1.55, p>.2).
For entitlement, the 2 X 2 ANOVA analysis did not reveal a review number main effect,
although the pattern was in the predicted direction with participants reporting lower levels of
entitlement to post their review when there was only one previous review (M(.136)=5.73) than when
there were two previous reviews (M(.13)=6.00), F(119)=1.98, p=.16. There was an unexpected main
valence effect with participants feeling more entitled to post negative reviews (M(.136)=6.03) than
positive reviews (M(.136)=5.70), F(119)=3.00, p=.086. Interestingly, there was also a significant
interaction (F(119)=4.86, p=.03), such that the expected drop in felt entitlement from the two-review
condition to the one-review condition was only significant for negative reviews (from M(.136)=6.03 to
M(.24)=5.69, t(60)=2.60, p=.012), but not positive reviews (t<1). (See Figure 7 for visual display of
the pattern.) Hence, while the data confirmed that a previous positive review by a more entitled author
disentitled participants from writing their negative reviews, it did not confirm that a previous negative
review by a more entitled author disentitled participants from writing their positive review. This
partially failed manipulation check obscures the interpretation of the social pressure effect on positive
review posting described below.
Figure 6. Felt obligation to post contrarian review (Full Sample, N=121)
Figure 7. Felt Entitlement to post contrarian review (Full Sample, N=121)
Hypothesis Testing (Full Sample, N=121)
Hypothesis 1. Recall my prediction that the frequency of positive review posting will be
higher among participants who are in the one previous review condition (i.e. high social pressure
condition) than among those who are in the two previous reviews condition (i.e. low social pressure
condition). The data did not yield a significant effect of previous review history on positive review
posting (χ(59)<1). Participants in the one previous review condition reported nearly equal positive
review posting frequency (57.1%) as participants in the two previous reviews condition (58.1%).
Hypothesis 2. I also predicted that the frequency of negative review posting will be lower
among participants who are in the one previous review condition than among those who are in the two
previous reviews conditions. Consistent with this prediction, participants in the one previous
review condition reported lower negative review posting frequency (62.1%) than participants in the
two previous reviews condition (78.8%), but this difference was not significant (χ(62)=2.09, p=.122).
Table 9. Frequency of posting contrarian reviews (Full Sample, N=121)

Would you post a review?

Valence            Previous Review History                     NO    YES   % YES
Negative Valence   1 Review (high social pressure condition)   11    18    62%
                   2 Reviews (baseline condition)               7    26    79%
Positive Valence   1 Review (high social pressure condition)   12    16    57%
                   2 Reviews (baseline condition)              13    18    58%
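The posting frequencies in Table 9 are compared with a chi-square test. The standard-library sketch below (the 1-df p-value uses the complementary error function) reproduces the reported χ² of about 2.09 for the negative valence counts; note that the two-sided Pearson p comes out near .15, so the reported p = .122 presumably reflects a different convention (e.g. a directional test):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    # survival function of a chi-square with 1 df: P(X > chi2) = erfc(sqrt(chi2 / 2))
    return chi2, erfc(sqrt(chi2 / 2))

# Table 9, negative valence: 1 review (11 no / 18 yes) vs. 2 reviews (7 no / 26 yes)
chi2, p = chi2_2x2(11, 18, 7, 26)
print(round(chi2, 2))  # → 2.09
```

The positive valence rows (12/16 vs. 13/18) can be passed through the same function and, as the near-identical percentages suggest, yield a chi-square well under 1.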
The above findings do not support my overarching prediction that people’s review posting
intentions are more sensitive to censoring social information (a more entitled previous author) when
they are dissatisfied, and more sensitive to obligatory social information (fewer previous reviews) when
they are satisfied. The underlying assumption of the above hypothesis testing was that participants not
only applied the broad social rules of censorship and obligation to their review posting behavior, but
were also motivated to abide by them. The partially failed manipulation checks in this study suggest
that the participant pool may not have been sensitive to the disparity between the obligation and
censorship pressures across the two review history conditions. Recall that in my analysis of Epinions
reviews, the patterns in the data indicating adherence to social pressure were driven by participants
who were both collectively identified with their audience and had elevated awareness of and adherence
to their social surroundings. In the next section, I excluded participants who scored in the bottom third
of either the collective identity or the self-monitoring scale distributions, indicating a lack of
motivation and/or awareness of the social norms regulating posting behavior, and repeated the above
analyses on this subsample.
Data selection
The analyses which I describe below only included participants who were both moderate to
high identifiers and moderate to high self-monitors. This data filter left me with 59 of the original 121
participants.
Manipulation Checks (Subsample, N=59)
Valence of experience manipulation. An independent samples t-test confirmed that
participants who were presented with a negative experience vignette recalled their restaurant experience
to be less satisfying (M(1.76)=2.19) than participants who were presented with a positive experience
vignette (M(.55)=6.68), t(57)=4.49, p<.001.
Social pressure manipulation. In designing this study, I assumed that participants would feel
more obligated to post and more censored (i.e. less entitled) from posting when they read one versus
two previous reviews that conflicted with their experience. To validate this assumption, I conducted 2
(Number of reviews: 1, 2) X 2 (Valence: positive, negative) ANOVAs on participants’ felt obligation
and entitlement.
For obligation, the 2 X 2 ANOVA analysis revealed the expected review number main effect,
with participants reporting higher levels of obligation to post their review when there was only one
previous review (M(.22)=5.95) than when there were two previous reviews (M(.21)=5.02), F(58)=8.96,
p=.004. Note that this effect did not interact with valence (F<1), suggesting that the strength of the
review number manipulation on felt obligation was not qualified by the valence of experience. In
addition, there was a marginally significant valence main effect, with participants feeling more
obligated to post positive reviews (M(.22)=5.80) than negative reviews (M(.21)=5.16), F(58)=4.29,
p=.072, which is in line with the finding from the Epinions data that readers show a preference for
positive over negative reviews. (See Figure 8.)
For censorship, the 2 X 2 ANOVA analysis revealed the expected review number main effect,
with participants reporting lower levels of entitlement to post their review when there was only one
previous review (M(.21)=5.24) than when there were two previous reviews (M(.21)=5.89), F(58)=5.07,
p=.028. The valence main effect was not significant (F<1), suggesting that across the two review
number conditions, participants did not report feeling more entitled to post a positive than to post a
negative review. Interestingly, there was also a marginally significant interaction (F(58)=2.74, p=.10),
such that the review number main effect was driven by the negative valence condition
(M(1.22)=4.89 vs. M(.78)=6.02, t(29)=3.13, p=.004) and was not significant in the positive valence
condition (p>.5).
Figure 8. Felt obligation to post contrarian review (Subsample, N=59)
Figure 9. Felt entitlement to post contrarian review (Subsample, N=59)
This pattern implies that the intended effect of review number manipulation on censorship only
worked in the negative experience condition. Therefore, while the data confirmed that a previous
positive review by a more entitled author disentitled participants from writing their negative reviews, it
did not confirm that a previous negative review, by a more entitled author, disentitled participants from
writing their positive review. This partially failed manipulation check obscures the interpretation of the
social pressure effect on positive review postings described below.
Hypothesis Testing (Subsample, N=59)
Hypothesis 1. Recall my prediction that the frequency of positive review posting will be
higher among participants who are in the one previous review condition (social pressure condition) than
among those who are in the two previous reviews condition (baseline condition). In support of this
prediction, I found that participants in the one previous review condition reported higher positive
review posting frequency (71%) than participants in the two previous reviews condition (50%), but this
difference was not significant, (χ(31)=2.37, p=.127).
Hypothesis 2. I also predicted that the frequency of negative review posting will be lower
among participants’ who are in the one previous review condition (social pressure condition) than
among those who are in the two previous reviews condition (baseline condition). Consistent with this
prediction, I found that participants in the one previous review condition reported lower negative
review posting frequency (57%) than participants in the two previous reviews condition (82%), but this
difference was also not significant (χ(28)=1.35, p=.22).
Table 10. Frequency of posting contrarian reviews (Subsample, N=59)

Would you post a review?

Valence            Previous Review History                     NO    YES   % YES
Negative Valence   1 Review (high social pressure condition)    6     8    57%
                   2 Reviews (baseline condition)               3    14    82%
Positive Valence   1 Review (high social pressure condition)    4    10    71%
                   2 Reviews (baseline condition)               7     7    50%
I also explored whether the Valence x Number of Reviews crossover pattern (see Table 10) is
significant. To test for this interaction effect, I conducted the Breslow-Day test of homogeneity of odds
ratios. The test yielded a marginally significant crossover interaction (χ(58)=3.65, p=.056), suggesting
that the direction of the previous review history effect on the frequency of contrarian review posting is
qualified by the valence of participants’ experience.
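The Breslow-Day statistic compares each stratum’s observed cell count with its expectation under the Mantel-Haenszel common odds ratio. A standard-library sketch (assuming the common odds ratio is not exactly 1), applied to the Table 10 counts, closely reproduces the reported χ² = 3.65, p = .056:

```python
from math import erfc, sqrt

def breslow_day(strata):
    """Breslow-Day homogeneity-of-odds-ratios test for 2x2 strata given as
    (a, b, c, d) = (row1-yes, row1-no, row2-yes, row2-no) tuples."""
    # Mantel-Haenszel common odds ratio across strata
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    or_mh = num / den
    chi2 = 0.0
    for a, b, c, d in strata:
        n, r1, c1 = a + b + c + d, a + b, a + c
        # Expected a-cell under or_mh: solve or_mh*(r1 - x)*(c1 - x) = x*(n - r1 - c1 + x)
        A, B, C = 1 - or_mh, (r1 + c1) * or_mh + (n - r1 - c1), -or_mh * r1 * c1
        x = (-B + sqrt(B * B - 4 * A * C)) / (2 * A)
        if not max(0, r1 + c1 - n) <= x <= min(r1, c1):
            x = (-B - sqrt(B * B - 4 * A * C)) / (2 * A)  # take the admissible root
        var = 1 / (1 / x + 1 / (r1 - x) + 1 / (c1 - x) + 1 / (n - r1 - c1 + x))
        chi2 += (a - x) ** 2 / var
    p = erfc(sqrt(chi2 / 2))  # chi-square survival function, 1 df (two strata)
    return chi2, p

# Table 10 counts (posted yes / no) for the 1-review vs. 2-reviews conditions
negative_valence = (8, 6, 14, 3)    # per-stratum odds ratio (8/6)/(14/3) ≈ 0.29
positive_valence = (10, 4, 7, 7)    # per-stratum odds ratio (10/4)/(7/7) = 2.5
chi2, p = breslow_day([negative_valence, positive_valence])
print(round(chi2, 2), round(p, 3))  # → 3.65 0.056
```

Each valence condition is one stratum; the crossover shows up as per-stratum odds ratios on opposite sides of 1 (about 0.29 for negative, 2.5 for positive experiences).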
The analyses of the subsample, consisting of participants who scored moderate to high on the
collective identity and the self-monitoring scales, partially support my overarching prediction that
people’s review posting intentions are more sensitive to censoring social information (a more entitled
previous author) when they are dissatisfied, and more sensitive to obligatory social information (fewer
previous reviews) when they are satisfied. In the negative experience condition, participants who were
one-time patrons of the restaurant simultaneously felt more censored and more obligated by one
two-time patron’s review than by two one-time patrons’ reviews, and their behavior was more aligned
with their feelings of censorship than with obligation. In the positive experience condition, participants
only felt more obligated, but not more censored, by one two-time patron’s review than by two one-time
patrons’ reviews. Hence, I cannot claim that participants’ increase in positive review posting intentions
(from the two to the one previous review(s) condition) indicates a dominance of the obligation norm
over the censoring norm, because the latter norm was not activated in the first place.
A major objective of this study was to document an internally valid causal story between
review posting intentions and social pressures derived from previous review history. The experimental
design permitted the manipulation of two variables of the social context (i.e. valence of experience and
previous review history), while holding other antecedents of posting intentions constant. It is important
to note that my valence of experience manipulation is confounded with the valence of previous review
history. When manipulating the valence of participants’ experience, I also manipulated the valence of
the previous reviews, such that in the positive experience condition, the previous reviews were negative
and vice versa. In my interpretation of the data, I attributed the valence effects to the direction of
opposition between the valence of previous reviews and the valence of participants’ experience.
However, because the design is missing the conditions where the previous reviews and participants’
experience are aligned, I cannot rule out the alternative explanation that the valence of experience
and/or the valence of previous reviews independently drive the valence effects.
Along the same lines, my method of manipulating obligation and censorship pressures via
previous review history may have implicated other social influences and may not have analogous social
meaning across the two valence conditions. In fact, although the number-of-reviews manipulation was
intended to have the same social meaning across the two valence conditions, the valence X number of
reviews interaction effect on felt censorship casts doubt on the parsimony of the social script implicated
in the vignette.
Study 4 Predictions
In the next set of analyses, I collapse across my review history conditions, and explore how
participants’ strength of relationship with their audience (operationalized by collective identity scores),
and their overall awareness of, and sensitivity to social norms (operationalized by self-monitoring
scores), differentially affect their intentions to post opposing reviews. I also explore how these
relationships are qualified by review valence. Although these analyses are exploratory, I propose that
the pattern of relationships will have implications for whether positive and/or negative reviews are
pronormative or antinormative. Specifically, a positive relationship between posting intentions and
either collective identity or self-monitoring should imply that the behavior is pronormative.
Alternatively, a negative relationship between posting intentions and either collective identity or self-
monitoring should imply that the behavior is antinormative. The data from the Epinions data set
suggest that positive reviews are more pronormative than negative reviews. Based on this finding, I
make the following set of predictions:
Hypothesis 3a: There will be a positive relationship between collective
identity and positive review posting.
Hypothesis 3b: There will be a negative relationship between collective
identity and negative review posting.
Hypothesis 4a: There will be a positive relationship between self-monitoring
and positive review posting.
Hypothesis 4b: There will be a negative relationship between self-monitoring
and negative review posting.
Furthermore, whether collective identity and/or self-monitoring drive posting intentions will
shed light on the specific mechanism that is driving (inhibiting) the pronormative (antinormative)
posting behavior. In particular, the collective identity scale administered in this study measured
participants’ relationship strength with the likely audience of their reviews (i.e. neighbors). The self-
monitoring scale, on the other hand, measures participants’ awareness of the normative landscape and
the strength of their impression management concerns. Hence, if the data reveals a positive (negative)
relationship between collective identity and pronormative (antinormative) posting intentions, then
participants’ adherence to this posting norm is driven by the altruistic/collective goal of helping their
neighbors. If the data reveals a positive (negative) relationship between self-monitoring and
pronormative (antinormative) posting intentions, then participants’ adherence to this posting norm is
driven by their awareness of the norm and fear of being derogated as antinorm deviants by their
audience. Note that the mechanisms implicated by the self-monitoring scale and collective identity
scale are distinct (i.e. altruism versus norm adherence), but they are not mutually exclusive.
Study 4 Results and Discussion
Table 11 provides correlations among the predictor variables included in the analysis.
Table 11. Vignette study correlation table

                          N      1        2        3        4
1 Self-Monitoring        120     —      .522**    .126   -.319**
2 Collective Identity    120              —      .275**   -.214*
3 Felt Obligation        120                       —       .122
4 Felt Entitlement       119                                —
** Correlation is significant at the .01 level (2-tailed).
 * Correlation is significant at the .05 level (2-tailed).
The findings described below are derived through logistic regression because the dependent
variable (posting intentions) is binary.
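Because the dependent variable is binary, ordinary least squares is inappropriate; a logit model estimates the log-odds of posting. The sketch below uses simulated data (not the study’s) and a standard textbook Newton-Raphson routine, not the author’s actual analysis code:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit logistic regression (with intercept) by Newton-Raphson."""
    X = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted posting probabilities
        grad = X.T @ (y - p)                         # score (gradient of log-likelihood)
        hess = (X * (p * (1 - p))[:, None]).T @ X    # observed information matrix
        beta += np.linalg.solve(hess, grad)          # Newton step
    return beta

# Simulated binary "post a review" outcome driven by one standardized predictor
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x)))     # true intercept -0.5, slope 0.8
y = rng.binomial(1, p_true)

est = fit_logit(x.reshape(-1, 1), y)
print(est)   # estimates should land near [-0.5, 0.8]
```

The fitted coefficients are log-odds; exponentiating a coefficient gives the multiplicative change in odds of posting per unit of the predictor.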
Collective identity and posting intentions. When regressing posting frequency on valence of
participant experience and identity, I found a marginally significant valence effect (β = -.676, p=.086).
Follow-up Chi square analysis showed that in the negative experience condition, 71% of participants
indicated that they would post a review, while in the positive experience condition, only 57.6%
indicated that they would post a review. This valence effect should not be perceived as conflicting with
the pattern of findings in the Epinions data set, because the former effect was documented for
contrarian reviews in particular, while the latter included all reviews, regardless of their valence relative
to the valence of the previous reviews.
Next I regressed posting frequency on identity, valence, as well as their interaction. This
model yielded a significant interaction (β = .675, p=.044), but no significant main effect of collective
identity on posting. Follow-up analysis supported hypothesis 3a but not 3b. Specifically, the
interaction was driven by identification increasing the likelihood of positive review posting (β = .606,
p=.016), but not decreasing the likelihood of negative review posting (β = -.069, p>.5). (See Figure 10
for a visual representation of the interaction.) This interaction pattern suggests that the positive social
pressure to post positive reviews is stronger than the social pressure to post negative reviews.
Figure 10. Relationship between collective identity and contrarian review posting occurrence
To verify that felt obligation accounts for the relationship between collective identity with the
audience and positive review posting intentions, I conducted a mediation analysis with felt obligation,
which was found to be strongly correlated with collective identity (β = 2.77, p<.0001), as the mediator.
When I regressed positive posting intentions on both identity and felt obligation, the identity effect was
reduced from β = .606, p=.016 to β = .408, p>.1, but this mediation test was only marginally
significant, Sobel t(.67) = 1.61, p=.10. (See Figure 11 for a summary of the mediation results.) I also ruled
out an alternative account that the relationship between collective identity and positive posting
intentions was driven by entitlement to post. In fact, for positive reviews the relationship between
collective identity and felt entitlement was not significant (p>.4).
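The Sobel statistics in this chapter can be reproduced from the printed path coefficients and standard errors via the standard formula z = ab / sqrt(b²·SEa² + a²·SEb²); small rounding differences against the reported values are expected:

```python
import math

def sobel(a, se_a, b, se_b):
    """Sobel test statistic for an indirect effect a*b."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return a * b / se_ab, se_ab

# Identity -> Obligation -> Positive posting (a = 2.77, SE = .70; b = .394, SE = .224)
z, se = sobel(2.77, 0.70, 0.394, 0.224)
print(round(z, 2), round(se, 2))        # 1.61, close to the reported Sobel t(.67) = 1.61

# Self-monitoring -> Entitlement -> Negative posting (a = -.624, SE = .144; b = 1.33, SE = .443)
z2, se2 = sobel(-0.624, 0.144, 1.33, 0.443)
print(round(abs(z2), 2), round(se2, 2))  # 2.47, matching the reported Sobel t(.34) = 2.47
```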
Figure 11. Path analysis: Identity → Obligation → Positive posting
[Path coefficients (SE): c = .606 (.251), p = .016; c′ = .408 (.273), p > .1; a = 2.77 (.7), p < .0001; b = .394 (.224), p = .078.]
Self-monitoring and posting intentions. When regressing posting frequency on self-
monitoring, experience valence, as well as their interaction, the model yielded a significant
valence effect (β = -.739, p<.0001), a significant self-monitoring effect (β = -.912, p=.011), and a
significant interaction (β = .916, p=.044). First, the negative relationship between self-monitoring and
posting intentions suggests that impression management concerns inhibited participants from posting
reviews. In line with hypothesis 4b, follow-up analysis of the interaction revealed that self-monitoring
decreased the likelihood of negative review posting (β = -.912, p=.011), but not positive review
posting (p>.5). (See Figure 12 for a visual representation of the interaction.) This interaction pattern
suggests that the censoring (negative) social pressure against posting negative reviews is stronger than
the censoring social pressure against posting positive reviews.
Figure 12. Relationship between self-monitoring and contrarian review posting
Next, I explored whether felt entitlement accounts for the negative relationship between self-
monitoring and negative review posting intentions. I conducted a mediation analysis with felt
entitlement, which was negatively correlated with self-monitoring (β = -.624, p<.0001), as the mediator.
When I regressed negative posting intentions on both self-monitoring and entitlement, the self-
monitoring effect was significantly reduced from β = -.912, p=.011 to β = -.305, p>.5, Sobel
t(.34) = 2.47, p=.01. (See Figure 13 for a summary of the mediation results.) I also ruled out an alternative
account that the negative relationship between self-monitoring and negative posting intentions was
driven by obligation to post. In fact, for negative reviews the relationship between self-monitoring and
felt obligation was not significant (p>.5).
Figure 13. Path analysis: Self-monitoring → Entitlement → Negative posting
[Path coefficients (SE): c = -.912 (.36), p = .011; c′ = -.305 (.46), p > .5; a = -.624 (.144), p < .0001; b = 1.33 (.443), p = .003.]
To summarize, the above exploratory analysis confirms that positive reviews are a public good,
motivated by authors’ relationships with their audience. Negative reviews, on the other hand, are
antinormative, inhibited by impression management concerns. Interestingly, despite evidence for the
social script that obliged participants to post reviews about their positive experiences, while inhibiting
them from posting reviews about their negative experiences, I found that participants, overall, revealed
stronger intentions to write negative than positive reviews. At first glance, this valence effect may seem to contradict
the findings in the Epinions data set, where I found positive reviews to be more frequent than negative
reviews. The disparity in posting intentions documented in this study, and actual online review posting
behavior observed on Epinions, may arise from a variety of important differences across the two
studies. First, the disparity in the valence effect across the two data sets may be due to participants of
the vignette study operating within a more nuanced and/or specific normative script, brought on by the
previous restaurant review history which they were required to read. In a natural setting, people do not
always read previous reviews and even those who do read previous reviews, do not always face
situations where the previous review(s) contradict their experience of the product/service. Situations
where one’s consumer experience conflicts with other consumers’ experiences may bring about
concerns about the validity of one’s experience, which may be qualified by valence. Indeed, because
negative experiences tend to be more salient and convincing (see Peters & Czapinski, 1990; Mittal,
Ross & Baldasare, 1998 for review) than positive experiences, people’s confidence about their positive
consumer experiences may be more fragile than their confidence about their negative consumer
experiences, reducing the overall positive experience posting rate and increasing the overall negative
experience posting rate.
Second, the Epinions participant pool may have been more sensitive to social pressures than the
participants in this study, due to the disparity in sampling procedures across the two studies. The
Epinions data set only included active members who had logged into Epinions at least once in the last
three months. In contrast, the present data set recruited people who posted at least one review in the last
six months on any of the forums available online. Thus, although participants in the vignette study
signaled awareness of the obligation to post positive reviews and censorship from posting negative
reviews, they may not have been as motivated to adhere to these pressures as the active Epinions
members.
Another important difference across the two studies is that while the Epinions data set included
reviews on a wide range of products, the present study focused on reviews of one specific service. This
is a relevant difference because previous work has shown that for services, the dominant motive for
WOM communication is reciprocating a good turn to the service provider. Specifically, the negative
reciprocation motive (vengeance) tends to be more powerful than the positive reciprocation motive
(helping the company) for eWOM activity (Sundaram et al., 1998), which is in line with the unexpected
valence effect documented in this investigation.
Summary
To recap, this vignette study provided two forms of evidence for reviews as speech acts. First,
the positive relationship between collective identity and positive review posting, and the negative
relationship between self-monitoring and negative review posting, suggest that reviews are a product of
social motives such as obligations to the audience, and impression management concerns. These results
have limited generalizability due to the specificity of the social context created by the experimental
manipulations as well as the specific (i.e. contrarian) posting behavior under investigation.
Second, the shift in behavioral intentions to post, resulting from previous review history,
supports the notion that reviews are interdependent speech acts, rather than isolated bursts of self-expression. The specific findings, however, were not fully aligned with the normative script implicated
in the hypotheses. Even when I excluded participants who scored in the bottom third of either the
collective identity or self-monitoring scales, my hypotheses were only marginally supported. The
failure to detect significant results may have been an artifact of the vignette method, which works under
the assumption that participants become psychologically engaged in the hypothetical situation and their
behavioral intentions are aligned with actual behavior. In fact, the partially failed manipulation checks
question the strength of participant engagement and/or level of consensus on the specific normative
script governing contrarian review posting behavior. Furthermore, the normative script driving
contrarian reviews may be more complex than the one implicated in the predictions. Also, review
history, which in natural settings is multi-dimensional and continuous, was operationalized here as uni-
dimensional and categorical. This simplification of the social context further limits the generalizability
of the findings.
While there may be a disconnect between behavioral intentions within a hypothetical scenario
and actual review posting behavior, the relationship between participants’ altruistic intentions,
impression management concerns, and their behavioral intentions alleviates this ubiquitous
concern with the vignette design. Furthermore, the fact that participants’ posting intentions were,
at least in part, a product of direct manipulation of the previous review history warrants larger-scale
field investigations of reviews as speech acts embedded in conversation.
CHAPTER 5
Introduction
The previous studies provide compelling and varied evidence that social norms shape
consumer posting behavior. While these studies varied in their methods, behavioral measures, and
operationalization of social norms, they all focused on the social pressures that arose from review
authors’ relationship with their audience. Another relevant relationship that can shape review posting
behavior is the authors’ relationship with the producer/service provider. Previous investigations of
consumers’ motives for engaging in WOM converge on the importance of the reciprocity motive, in the form
of either rewarding the service provider with public praise or punishing him with public criticism (see
Henning-Thurau et al., 2004 for review). Interestingly, a study that explored the differences between
the motivational structure of positive and negative WOM behavior found that participants reported
negative reciprocity motives with higher frequency than positive reciprocity motives (Sundaram et al., 1998).
This implies that the reciprocity motive can produce an opinion expression bias in the review valence
distribution, favoring negative over positive reviews. The overall stronger negative than positive
posting intentions in the vignette study (Chapter 4) may have been driven by this valence disparity in
the reciprocity motive, although there was no way to test for this effect. In the present investigation, I
track both the strength of the relationship between the authors and their audience and the strength of
relationship between the authors and the service provider.
The key argument put forth throughout this dissertation is that social motives drive consumers’
decision to post a review. Thus far, my operationalizations of this decision have been imperfect. In the
Epinions archival data set (Chapter 2), the actual decision to post was unobservable; in the Epinions
survey (Chapter 3), the decision to post was based on self-reports, which are sensitive to memory biases;
and in the vignette study (Chapter 4), the decision to post was hypothetical. The present investigation
was designed with the main objective of directly observing review posting rates, by tracking not only
the occurrence of reviews, but also the complete pool of naturally occurring consumer experiences from
which these reviews came about. Specifically, I first collected consumers’ (i.e. patients’) level of
satisfaction with a service experience (i.e. patient experience in a dental office chain) and tracked their
subsequent review posting behavior. In this way, I derived a true experience posting rate. In Study 1, I
tested for the presence of valence-based posting biases by comparing the valence distribution of
consumers’ satisfaction with the valence distribution of the resulting consumer reviews. In Study 2, I
explored how consumers’ valence of experience affected the content of their subsequent reviews.
Study 5 Predictions
The first objective of this study was to explore how consumers’ relationship strength with their
audience and their relationship with the service provider differentially affect positive and
negative review posting rates. Previous research found that consumers often engage in WOM in order
to punish and/or reward the service providers. Dichter’s seminal work (1966) identified reciprocity as
one of the four main motives for engaging in WOM communication. Since then, more recent
ethnographic work on WOM behavior has confirmed the relevance of this motive via consumers’ self-
reports (Sundaram et al., 1998; Engel et al., 1993). However, within the context of online product
reviews in particular, reciprocating the producer was not a frequently mentioned antecedent (Henning-
Thurau et al., 2004). It is important to note that the present study investigates service reviews, for
which the reciprocity motive may be more relevant than for product reviews, because a typical service
experience has a stronger human interaction component than a typical product experience. It is
therefore worthwhile to explore the role of the reciprocity motive (deriving from relationship strength
with the service provider) in consumers’ decision to post a service review. Specifically in this field
study, relationship strength between the consumer and the service provider is operationalized by the
amount of time the participants have been patients of the dental office.
Hypothesis 1: Patients who are more tenured will be more likely to post reviews about their experience than their less tenured counterparts.

In other words, tenure will increase the odds of posting. Recall the Sundaram et al. (1998) findings that the negative reciprocity motives (i.e. punishing
the service provider) for WOM behavior are stronger than the positive reciprocity motives (i.e.
rewarding the service provider). This is in line with a wide body of work showing that negative
information is more perceptually salient than positive information, is given more weight than positive
information, and elicits a stronger physiological response than positive information (Peters &
Czapinski, 1990; Mittal et al., 1998). I predicted that the above tenure effect on the odds of posting would
interact with the valence of patients’ experience.
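The phrase “increase the odds of posting” has a concrete logit reading: a coefficient β on tenure multiplies the odds of posting by exp(β) for each additional year of tenure. A small sketch of the transformation (the coefficient and intercept below are hypothetical, chosen only for illustration):

```python
import math

def posting_probability(intercept, beta_tenure, years):
    """Posting probability implied by a one-predictor logistic model."""
    log_odds = intercept + beta_tenure * years
    return 1.0 / (1.0 + math.exp(-log_odds))

beta_tenure = 0.2                        # hypothetical log-odds coefficient per year
odds_ratio = math.exp(beta_tenure)
print(round(odds_ratio, 3))              # 1.221: each extra year raises the odds by ~22%

# Implied probabilities for a brand-new patient vs. a 10-year patient
p_new = posting_probability(-1.5, beta_tenure, 0)
p_ten = posting_probability(-1.5, beta_tenure, 10)
print(round(p_new, 3), round(p_ten, 3))  # 0.182 vs. 0.622
```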
Hypothesis 2: The tenure effect on odds of posting will be stronger for patients who had a positive experience than for those who had a negative experience at the dental office.

I also tested the effect of norm conformity on posting behavior, by examining how the strength
of consumers’ relationship with their audience differentially affects the occurrence of positive and
negative review posting rates. While I did not include a direct measure of consumers’ relationship
strength with their audience, I introduced a manipulation of this variable into the experimental design
by specifying the identity of the audience to be either friends or new neighbors. Based on previous
findings in the Epinions data showing that positive reviews are both more prescriptively and
descriptively pronormative than negative reviews, I expected patients’ relationship strength with their
audience to simultaneously increase their odds of posting a review about their positive experience and
decrease their odds of posting a review about their negative experience.
Hypothesis 3: Patients whose audience consists of friends will be more (less) likely to post positive (negative) reviews than patients whose audience consists of new neighbors.

In other words, valence of patients’ experience will qualify the effect of relationship strength with audience on patients’ odds of posting. Note, the test of this hypothesis allowed me to realize
another objective for this study, which was to explore whether relationship strength with the audience
will yield a similar pattern of results as collective identity did in the studies described in Chapters 3 and
4. This is because both collective identity and relationship strength with the audience elevate
impression management concerns and communal intentions, which are the driving factors of adherence
to the behavioral posting norm favoring positive over negative reviews.
Study 5 Methods
Setting
This field study was conducted at three locations of a small dental office chain in southern
California. Phase I of the study was conducted over a 10-day period, although each office location only
collected data for 5 days; and Phase II of the study was conducted over a 14-day period. There was also
a delay between the two phases that ranged from 9 to 19 days.
Procedures (Phase I)
All adult patients who came in for their appointment on the days of testing were asked to fill
out a short satisfaction questionnaire. Following their treatment, the dental assistant or office manager
handed them the satisfaction survey, which was in the form of a fold-over card, and left the treatment room
in order to provide privacy for the patients while they filled out the survey. The front of the card
instructed patients to fill out the survey and drop it in the slit of a box marked “Completed Surveys”
located in the front office. (The box was taped in several places to discourage tampering by the staff.)
Patients were reassured that their responses would be kept confidential from the staff. Note, patients were
instructed to provide their email address, which was used to contact them in Phase II of the study and
to merge the two phases of data collection. Once the data were merged, the email addresses were
permanently removed from the data file. To ensure a high response rate, patients were entered into an
iPod drawing for completing the survey. The questionnaire took 1-3 minutes for patients to complete.
Measures (Phase I)
Satisfaction. Patients were instructed to report on their level of satisfaction with eight
components of their patient experience: overall experience, quality of dental care, staff demeanor,
doctor, value, timeliness, ambiance, and telephone service. For each component, patients indicated on a
7-point Likert scale their level of satisfaction, from (1) Very Unsatisfied to (7) Very Satisfied. The eight
satisfaction ratings were highly intercorrelated (alpha = .84), and were averaged to form a single measure
of satisfaction.
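The reliability figure above is Cronbach’s alpha, α = k/(k−1) · (1 − Σ item variances / variance of the summed scale). A minimal sketch of the computation on an illustrative ratings matrix (toy data, not the study’s):

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                           # number of items
    item_vars = ratings.var(axis=0, ddof=1)        # per-item variances
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy example: three internally consistent 7-point satisfaction items
toy = np.array([[7, 6, 7],
                [5, 5, 6],
                [2, 3, 2],
                [6, 6, 7],
                [1, 2, 1]])
print(round(cronbach_alpha(toy), 2))   # 0.98: highly intercorrelated items
```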
Patient Profile. Patients reported how long they had been patients at the practice (patient
tenure) and the number of other family members who were also patients of the practice (family
members).
Procedures (Phase II)
Nine days following the last Phase I data collection day, participants were contacted by the
dental practice via email about the results of the iPod drawing. I used this opportunity to solicit their
participation in Phase II of the study. Participants were invited to log into a web study, run by a reputable
West Coast university, designed to mimic a typical consumer review composition page. Once
they logged in, they were reassured that their review of the practice would be anonymous and not
attached to any identifying information. They were then asked to provide their email address, and write
a review of the dental office. Once their email addresses were used to match the satisfaction data from
Phase I with the review data from Phase II, their emails were permanently deleted from both the
original and merged files and replaced with a randomly generated ID number.
Relationship Strength Manipulation. In the solicitation emails, participants were assigned to
one of two conditions. In the high relationship strength condition they were asked, “How would you
review this dental practice if your friend was going to read your review?” In the low relationship
strength condition, they were asked, “How would you review this dental practice if someone who
recently moved into the neighborhood was going to read your review?” Across both conditions, they
were instructed to click on a link provided in the email to write their review of the dental practice.
Measures (Phase II)
Review Rating. The patients who chose to write reviews also rated the dental practice on a 5-
point Likert scale from (1) Very Poor to (5) Excellent, and then wrote their review in the provided text
area. The rating measure served as a memory check to ensure that reviews reflected patients’
satisfaction ratings at the time of their visit to the dental practice. In addition, a few of the patients had
follow-up visits between the testing days and the day they wrote their review, which may have shifted
their overall satisfaction with the dental practice.
Review Length. Review length was measured by the total word count of each review.
Debriefing
After completing their review, participants continued to the next page, where they were
debriefed about the purpose of this research and reassured that the review that they wrote would not be
posted on any consumer review forum and would not be attached to their identity in the dataset.
Study 5 Results and Discussion
Sample attrition and response rates
Following each day of Phase I data collection, the number of surveys collected at each office
was counted by the researcher. In addition, the office staff provided the total number of adult patients
who visited the office on the days of data collection. To calculate the response rate, the total number of
completed surveys was divided by the total number of adult patients who visited the office on the days
of data collection. Of the 481 adult patients who visited the offices, 28 patients did not complete the
survey. As a result, the response rate across all days and across all three offices was 94% and did not
vary significantly across offices. Next, an additional 11%, or 49 surveys, with invalid or duplicate
emails were removed. Therefore, of the 481 patients who visited the office, 404, or 84%, were solicited
to write a review of the office. (See Table 12 for sample attrition from Phase I to Phase II.)
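The attrition arithmetic can be checked directly from the counts reported above:

```python
visited = 481        # adult patients seen on the testing days
no_survey = 28       # did not complete the satisfaction survey
bad_emails = 49      # invalid or duplicate email addresses

completed = visited - no_survey       # surveys collected
solicited = completed - bad_emails    # patients emailed in Phase II
print(completed, solicited)           # 453, 404
print(f"{completed / visited:.0%} {solicited / visited:.0%}")  # 94% response, 84% solicited
```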
Table 12. Patient sample attrition rate between Phase I and II of data collection

Location   Total Patients Who Visited the Office   No Survey or Email   Bad/Duplicate Email   Final Phase II Sample Size
1                  156                                     9                    15                    132
2                  198                                    12                    18                    168
3                  127                                     7                    16                    104
TOTAL              481                                    28                    49                    404
PERCENT                                                 5.82%                10.61%                 83.99%
Attrition: 16.43%
Of the 404 who were solicited, 88 wrote a review. Of those 88 reviews, 7 were excluded
because they consisted of fewer than 4 words (4), made no sense (3), or did not match the author’s
initial satisfaction levels (3). Eleven more reviews (3 positive and 8 negative) were
excluded because it was determined through the coding that the audience of those reviews was the
dental practice (e.g. doctors, staff, administrators, etc.) rather than fellow consumers. Finally, 9
reviews/posts (2 positive and 7 negative) from Phase II could not be matched to data from Phase I. This
left me with 63 reviews that were included in the analysis below.9 Occurrence of post was determined
by matching emails of satisfaction surveys from Phase I with emails of reviews from Phase II. The
review rates did not vary significantly across offices or days.
9 Some reviews were excluded for more than one of the above mentioned reasons. There was no significant difference in valence
distribution of excluded reviews from the included reviews.
Descriptive Statistics
Table 13 provides the descriptives of the variables included in the analysis, as well as the
correlations among them.
Table 13. Descriptives and correlations of dental patients study
N Mean SD 1 2 3 4 5
1 Review Rating (Valence) 63 2.92 1.88 0.203 0.049 0.234 .943**
2 Patient Tenure 390 2.51 2.52 0.27 -0.045 .139**
3 Review Length 63 120.9 78.04 -0.019 0.027
4 Family Members 393 1.78 1.55 0.019
5 Patient Satisfaction 393 5.17 2.15**Correlation is significant at .01 level (2 tailed).*Correlation is significant at .05 level (2 tailed).
DESCRIPTIVES CORRELATIONS
Patient satisfaction. Figure 14 displays the distribution of patients’ overall satisfaction with the
dental practice. Note that the distribution is negatively skewed (M = 5.17, SD = 2.14), with 50% of patients
scoring 6 or above, and the mode of the distribution located at the maximum of the range (i.e. 7).
Thus, I used the log of the satisfaction scores in the analysis described below.
Figure 14. Dental patient satisfaction distribution
7.006.005.004.003.002.001.00
Average Satisfaction
125
100
75
50
25
0
Fre
qu
ency
Mean = 5.1785Std. Dev. = 2.14094N = 404
Histogram
Patient Profile. The average length of time that the patients were patrons of the dental office
was M = 2.51 years (SD = 2.51), with a range of 0 to 15 years. The average number of their family members
who were also patients of the office was M = 1.77 (SD = 1.55), with a range of 0 to 7 members. Interestingly,
these two profile measures are not correlated with each other (r(390) = -.045, p>.3). Even though one
would initially expect tenure and family members to be intercorrelated, because both are indicative
of commitment, the family lifecycle may decouple this relationship. At the inception of the family,
tenure and family members may both reinforce the patients’ commitment to the dental office; however,
over time, while tenure may continue to grow and reinforce commitment, family member volume will
inevitably decrease as children leave their parents’ households and/or parents die. Despite the lack of
a linear relationship between tenure and quantity of family members, I chose to include family members
as a control in the hypothesis testing model, due to its theoretical relevance.
Review Valence (Ratings). Three patients were excluded from the analysis, because their
ratings of the dental office at the time of the review were the opposite of their satisfaction ratings
immediately following their appointment. After the exclusion of these patients, patients’ ratings and
satisfaction scores were highly correlated (r(63) = .94), implying that most patients’ reviews were
consistent with their satisfaction at the time of the appointment. Note that the valence distribution
of reviews is bimodal at the extremes (see Figure 15).
Figure 15. Distribution of dental patients’ valence of reviews
Interestingly, while less than 20% of the patients reported that they were mildly to extremely
dissatisfied with their experience at the dental office (see Figure 14), nearly half of all reviews posted
were negative (see Figure 15). The difference between the review valence distribution and the
patients’ satisfaction distribution suggests that there was an opinion expression bias favoring negative
over positive reviews.
Review Length. After excluding 4 participants whose reviews were fewer than 4 words in length,
the average word count of the reviews included in the sample was M = 120.89 (SD = 78.04).
Posters vs. Nonposters. I explored how the patients who posted reviews (n(p)=63) differed
from their non-posting counterparts (n(np)=330). T-tests (see Table 13) revealed that patients who
wrote reviews were less satisfied (t(391)=4.55, p<.001), more tenured (t(388)=3.23, p=.001), and had
more family members as patients in the practice (t(391)=2.98, p=.003) than patients who did not write
reviews. The difference between posters’ and non-posters’ satisfaction levels implies an expression
bias favoring negative over positive experiences. Also, the fact that posters were more tenured than
non-posters provides initial support for hypothesis 1, predicting that tenure would increase the odds that
the patient would post a review about his experience.
Table 13. Comparison of dental patients who posted versus those who opted out of posting

                        Review     N    Mean    SD       t     df       p
Patient Satisfaction    Absent   330    5.38   2.04    4.55   391   <.0001
                        Present   63    4.07   2.36
Patient Tenure          Absent   327    2.33   2.39   -3.23   388     .001
                        Present   63    3.44   2.91
Family Members          Absent   330    1.68   1.52    2.98   391     .003
                        Present   63    2.30   1.56

Table 14. Model summary predicting dental patients’ review posting rate
(binary logistic regression; entries are β, S.E., p)

Models 1-5 (full sample), in order:
  Satisfaction:           -0.332, 0.066, <.0001;  -0.38, 0.094, <.0001;  -0.505, 0.103, <.0001;  -0.322, 0.065, <.0001;  -0.352, 0.069, <.0001
  Relationship Strength:   0.506, 0.302, 0.094;    0.178, 0.545, >.70;    0.623, 0.582, >.20;     0.502, 0.308, 0.103
  Tenure:                  0.216, 0.054, <.0001;   0.224, 0.055, <.0001;  0.23, 0.056, <.0001;    0.251, 0.059, <.0001
  Family Members:          0.323, 0.095, 0.001;    0.356, 0.098, <.0001;  0.332, 0.096, 0.001;    0.326, 0.202, 0.106;   0.445, 0.128, 0.001
  Satisfaction x Relationship Strength:   0.666, 0.389, 0.087;   0.931, 0.415, 0.025
  Satisfaction x Tenure:                 -0.078, 0.032, 0.013;  -0.082, 0.032, 0.012
Model 8 (dissatisfied patients): Relationship Strength 0.218, 0.55, 0.693; Tenure 0.619, 0.169, <.0001
Model 9 (satisfied patients): Relationship Strength 1.343, 0.477, 0.005; Tenure 0.163, 0.064, 0.011
Hypothesis testing
Because the main dependent variable was binary (i.e. review presence vs. absence), the
predictions were tested with binary logistic regression. (See Table 14 for a summary of the models.)
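The modeling approach described above can be sketched as follows. This is an illustrative reconstruction on simulated data, not the author’s analysis: the variable names, generating coefficients, and sample values are all hypothetical, and the logistic regression is fit by Newton-Raphson to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical predictors (for illustration only)
satisfaction = rng.uniform(1, 7, n)   # 7-point satisfaction scale
tenure = rng.uniform(0, 10, n)        # years as a patient
family = rng.poisson(1.5, n)          # family members in the practice

# Generating model mirrors the reported pattern: dissatisfaction and tenure
# raise posting odds, and the negative interaction makes the tenure effect
# stronger for dissatisfied patients.
lp = (1.5 - 0.35 * satisfaction + 0.25 * tenure
      - 0.08 * satisfaction * tenure + 0.30 * family)
posted = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp)))

# Design matrix: intercept, main effects, control, and the interaction term
X = np.column_stack([np.ones(n), satisfaction, tenure, family,
                     satisfaction * tenure])

# Fit the binary logistic regression by Newton-Raphson
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted posting probability
    grad = X.T @ (posted - p)             # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

print(dict(zip(["intercept", "satisfaction", "tenure",
                "family", "sat_x_tenure"], beta.round(3))))
```

The fitted coefficients are log-odds effects, so a positive tenure coefficient means each additional year of tenure multiplies the odds of posting by exp(β).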
Hypotheses 1 & 2.
I predicted that patients’ tenure with the dental office would increase their odds of posting a review,
especially when they were dissatisfied with their patient experience. In line with hypothesis 1, when
regressing review occurrence on tenure, satisfaction, and the tenure x satisfaction interaction, while
controlling for family members, I found that more tenured patients were more likely to post a review
than their less tenured counterparts (β(.06)=.251, p<.0001). In addition, the data yielded a significant
negative satisfaction effect (β(.07)=-.35, p<.0001), such that dissatisfied patients were more likely to
post a review than their satisfied counterparts. In line with hypothesis 2, satisfaction also qualified the
positive tenure effect (β(.03)=-.08, p=.012) on odds of posting, such that the tenure effect was stronger
for patients who were dissatisfied with their experience than those who were satisfied. As Figure 16
demonstrates, the positive relationship between tenure and odds of posting was stronger for the most
dissatisfied patients (i.e. those who scored in the bottom third of the distribution, β(.169)=.619,
p<.0001) than satisfied ones (i.e. those who scored in the top third of the distribution (β(.064)=.163,
p=.011). This interaction pattern supports the notion that patients’ intention to retaliate in response to
a negative experience by posting a negative review is stronger than their intention to reward a positive
service experience by posting a positive review.
Figure 16. Relationship between patients’ tenure with dental office and odds of posting10
Hypothesis 3. I also predicted that patients whose audience consists of friends will be more
likely to post positive reviews and less likely to post negative reviews than patients whose audience
consists of new neighbors. This hypothesis was based on the assumption that positive reviews are
pronormative and negative reviews are antinormative. When regressing review occurrence on
relationship strength, satisfaction and relationship strength x satisfaction, while controlling for family
members, I did not find that patients are significantly more likely to post reviews for an audience
consisting of friends than for an audience consisting of new neighbors (β(.582)=.623 , p>.2), although
the pattern was in the predicted direction. However, there was a significant relationship strength x
satisfaction interaction (β(.415)=.931, p=.041), although it did not resemble the crossover pattern
predicted in hypothesis 3 (see Figure 17 for a visual display of the interaction pattern). In line with
hypothesis 3, satisfied patients (i.e. those whose scores ranged from 5.5-7 on a 7-point Likert scale)
were nearly three times more likely to post for friends (15%) than for neighbors (5%), χ²=7.39, p=.005
(β(.477)=1.343, p=.005); however, relationship strength did not qualify posting rates of dissatisfied
patients (i.e. those whose scores ranged from 1-2.5 on the same scale), χ²<1 (β(.55)=.218, p>.6).
10 In order to visually demonstrate how patients’ valence of experience qualifies the relationship between tenure and odds of posting, I converted the continuous patient satisfaction variable into a three-level categorical variable.
Figure 17. Relationship between valence of satisfaction and review posting rate
My original crossover pattern prediction was based on the premise that positive reviews are
pronormative and negative reviews are antinormative. Assuming that relationship strength with their
audience drives patients’ norm conformity, the pattern in the data supports the notion that positive
reviews are more pronormative than negative reviews, but not that negative reviews are more
antinormative than positive reviews. A comparison across the two audience relationship strength
conditions provides evidence for positive normative pressure (i.e. obligation) to post positive
reviews, but no evidence for negative normative pressure (i.e. inhibition) against posting negative reviews.
The overarching argument in this dissertation is that valence-based differences in the social-
motivational structure of review posting behavior imply that positive and negative reviews are
phenomenologically distinct. The above hypothesis testing implies that obligation to the audience is a
stronger predictor of positive than negative review occurrence, while the desire to retaliate is a stronger
predictor of negative than positive review occurrence. Given this finding, I would expect to find
differences between the characteristics of positive and negative reviews, other than simply the valence
of their content. While the purpose of study 5 was to address the question of how positive and negative
reviews come about, the purpose of study 6 is to address the question of how positive and negative
reviews differ.
Study 6 Purpose
Based on the above evidence that positive and negative reviews are posted for distinctly
different social reasons, I expected to find differences in content and structure as a function of reviews’
valence. Specifically, I explored whether reviews vary in content, length, the extent to which they are
logical, and the extent to which they are emotional as a function of the valence of the review. Another
objective of this study was to investigate how review length was affected by patients’ tenure with the
dental office, the valence of their experience, and the relationship strength with their audience. It is
important to note that the review sampling (i.e. review occurrence) was not independent of tenure,
experience, and relationship strength with audience; as a result, these variables suffer from a
restricted-range problem when predicting word count.
Study 6 Methods
Review sample
Recall that of the 404 patients who were solicited to review the dental office, 88 patients wrote
reviews. Seven were excluded because their reviews consisted of fewer than 4 words (4), made no sense
(3), and/or did not match their initial satisfaction levels (3). Eleven more reviews (3 positive
and 8 negative) were excluded because it was determined through the coding that the audience of those
reviews was the dental practice (e.g. doctors, staff, administrators, etc.) rather than fellow consumers.
Finally, 9 reviews/posts (2 positive, and 7 negative) from Part II could not be matched to data from Part
I. This left me with 63 reviews that were included in the analysis below.11
Measures
Review Length. A word count was conducted on each review to determine its length. The
range of review length was 18 to 388 words, with a mean of 120 words, a standard deviation of
78 words, and a median of 93 words. (See Figure 18 for the distribution of review length.)
11 Some reviews were excluded for more than one of the above mentioned reasons. There was no significant difference in valence
distribution of excluded reviews from the included reviews.
Figure 18. Length distribution of dental office reviews
Content Themes. Two coders, blind to patients’ dental office experience, coded all reviews on
whether the authors mentioned the following themes: doctors/staff, dental work, price/cost, trust, and
physical pain. The two coders’ agreement level on these categories ranged from Kappa=.66 to .83. A
third coder resolved the discrepancies. See Table 15 for examples of content and coder agreement
levels for each theme.
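The inter-coder agreement statistic used here, Cohen’s kappa, corrects raw agreement for the agreement expected by chance. A minimal sketch for two coders’ binary (present/absent) judgments, with hypothetical codings:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two coders' binary (0/1) judgments."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement
    p_yes = np.mean(a) * np.mean(b)             # chance both code "present"
    p_no = (1 - np.mean(a)) * (1 - np.mean(b))  # chance both code "absent"
    pe = p_yes + p_no                           # total chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical codings of 10 reviews for one theme (1 = theme present)
coder1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(coder1, coder2), 3))
```

Here the coders agree on 8 of 10 reviews, but because both code "present" frequently, the chance-corrected kappa (about .52) is well below the raw 80% agreement, which is why kappa rather than percent agreement is reported.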
Table 15. Descriptives: Content themes of dental office reviews
(All themes coded at 2 levels, Absent (0) / Present (1). Kappa and p give coder agreement; β, S.E., and p give the theme’s relationship with review valence.)

Doctor/Staff. Example: "This dentist comes from the top dental program and is very gentle." Kappa=.829, p<.0001; Present: 82.50%; β=0.108, S.E.=0.145, p>.40
Dental Work. Example: "the color of the veneer does not match my teeth." Kappa=.573, p<.0001; Present: 69.80%; β=-0.021, S.E.=0.117, p>.80
Price/Cost. Example: "His prices are extremely competitive." Kappa=.721, p<.0001; Present: 34.90%; β=0.217, S.E.=0.119, p=.067
Trust. Example: "Its a huge relief to be able to trust someone who will be drilling in your mouth." Kappa=.418, p<.0002; Present: 46%; β=0.02, S.E.=0.108, p>.80
Physical pain. Example: "The numbing shot was very painful." Kappa=.66, p<.0001; Present: 28.60%; β=-0.024, S.E.=0.119, p>.80

Table 16. Descriptives: Dental office review dimensions
(α, p, and r give coder agreement; Mean and SD are descriptives.)

Valence. Coder instructions: "Rate the extent to which this review was positive or negative toward the dental office." Scale: -3 (extremely negative) to +3 (extremely positive). α=.927, p<.001, r=.868; Mean=0.13, SD=2.218
Emotionality. Coder instructions: "Rate the extent to which this review was laced with emotions." Scale: 1 (not at all emotional) to 7 (extremely emotional). α=.576, p<.002, r=.413; Mean=3.3, SD=1.444
Logicality. Coder instructions: "Rate the extent to which this review included a logical argument for the patient's opinion." Scale: 1 (not at all logical) to 7 (extremely logical). α=.547, p<.003, r=.377; Mean=4.14, SD=1.268
The subject most frequently written about was dentists and the staff of the dental office.
Eighty-three percent of reviews discussed the characteristics and/or behavior of dentists and staff.
Seventy percent of reviews included discussions of the dental work, 46% of reviews mentioned
(dis)trust, 35% of reviews mentioned price/cost of the dental procedures, and only 28% of reviews
mentioned pain or lack thereof.
The coders also rated the content of each review on the following three dimensions.
Valence. Coders rated each review on whether its content was positively or negatively
valenced on 7-point likert scales, anchored at (-3) extremely negative, and (3) extremely positive.
These ratings were used as a check to ensure that the valence of reviews matched patients’ satisfaction
with experience (as they reported it at the time of their visit to the dental office). Three patients’
valence of reviews contradicted their satisfaction ratings at the time of their visit. This may have been
due to events that transpired between the original visit and the occurrence of the review, which were not
tracked during the study. Hence, these patients were excluded from the analyses of both studies (5 and
6). After this exclusion, the correlation between the valence of patients’ satisfaction ratings at the time
of their visit to the dental office and the valence of their reviews of the dental office was r(63)=.94,
p<.0001.
Emotionality. Coders rated the extent to which each review’s content was emotional (i.e. had
emotional undertones) on a 7-point Likert scale, anchored at (1) not at all emotional and (7) extremely
emotional. The two coders’ scores for each dimension were collapsed (averaged) into a single score for
each review. Examples of reviews that were coded as extremely emotional and not at all emotional are
below:
Extremely emotional: “If you enjoy spending a lot of money for pain ... this is the place for you. They pulled a tooth on my minor child without any numbing gel. They prepared a new crown that looked more like a blob of gum rather than a tooth. Finally 2 years later after multiple redos of this work, I can finally leave. They have assistants doing work that typically dentists do. The only nice thing about this place is the office facility. Unfortunately, the pleasantness of the front office is only a "cover" for it being a dental mill. I am sharing my story so that others will not have to endure the torture I went through. If you value your teeth, STAY AWAY!”
Not at all emotional: “Probably the best dentist I've ever gone to, and
I'm the type to go to a new dentist every time I need a check-up or get rid of a cavity. What can I say, I like convenience! I think this is one dentist I'll definitely be coming back to. He was super informative, gentle, and walked me through the entire process. I usually hate the dentist, but the staff really made my experience a pleasant and comforting one.”
Logicality. Coders rated the extent to which each review’s content was logical (i.e. included a
logical argument for the author’s opinion) on a 7-point Likert scale, anchored at (1) not at all logical and (7)
extremely logical. The two coders’ scores for each dimension were collapsed (averaged) into a single
score for each review. Examples of reviews that were coded as extremely logical and not at all logical
are below:
Extremely logical: “Dr. X is a skilled dentist with a very nice personality. His staff is professional and courteous. However, I have a generally negative impression of their office because I feel that they are too aggressive in their dental practices. I have been to many dentists over my 36 years and my impression is that they range widely in how aggressive they are with respect to dental treatment. By aggressive, I mean favoring preventive treatments that may not be entirely necessary. My specific concern is that when they looked at my teeth they indicated that I have systems of periodontal infection and need antibiotics and a deep cleaning of some sort. They base this conclusion on the fact that when they put a metal probe in the space between the gum and the tooth it feels a little soft and bleeds, which indicates iminant [sic]gingivitis. Now I have never had a cavity in my life, have generally good dental practices, and no family history of gum problems. So to suddenly learn that I need extensive periodontal treatment seemed suspicious.”
Not at all logical: “Oh My god!! These people should be out of
business! How could they be in practice?? how? Seriously! not a friendly atmosphere, every time I have an appointment I see new personal [sic], very gentle and nice assistant disappeared few months ago, and in last 3 years 4 doctors left, Why???”
See Table 16 for level of agreement among coders, as well as descriptive statistics for each
dimension.
Study 6 Results and Discussion
The purpose of this analysis was to explore how negative reviews differed from positive
reviews. My descriptions and discussion of the review valence effects treat the valence variable as
categorical. However, the statistics were conducted with the continuous form of this variable, in order
to maximize the power of the analyses. The correlation matrix in Table 17 summarizes the
relationships among the continuous review dimensions.
Table 17. Correlations between content dimensions of dental office reviews

                          N      1        2        3        4        5
1 Valence of Experience   63     -      0.027   .837**  -0.419*   0.076
2 Word Count              63             -     -0.029    0.107    .316*
3 Valence of Review       63                      -     -.249*   -0.007
4 Emotionality            63                               -     -0.006
5 Logicality              63                                        -

** Correlation is significant at the .01 level (2-tailed). * Correlation is significant at the .05 level (2-tailed).
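The kind of correlation matrix shown in Table 17 can be illustrated with a small simulation. This sketch uses hypothetical data whose generating model builds in the reported negative valence-emotionality relationship; the variable names and parameter values are assumptions, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 63  # matches the number of retained reviews

# Hypothetical review-level measures (illustrative values only)
valence = rng.normal(0, 2.2, n)                  # coder-rated valence, roughly -3..+3
emotionality = 4.0 - 0.5 * valence + rng.normal(0, 1.2, n)  # more negative -> more emotional
word_count = rng.normal(120, 78, n)              # review length in words

data = np.column_stack([valence, emotionality, word_count])
labels = ["valence", "emotionality", "word_count"]

# Pairwise Pearson correlations among the three dimensions
corr = np.corrcoef(data, rowvar=False)
for label, row in zip(labels, corr):
    print(f"{label:>12}", np.round(row, 3))
```

Because emotionality is generated with a negative loading on valence, the off-diagonal entry for that pair comes out negative, mirroring the direction of the r = -.25 result discussed below.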
Review dimensions. The correlational analyses detected a negative relationship between
reviews’ valence and emotionality r(63)=-.25, p<.05. This finding is consistent with the notion that
negative reciprocity motives play a larger role in review posting behavior than positive reciprocity
motives. Both forms of reciprocity elicit emotional expression in reviews. While negative reciprocity
(i.e. retaliating against the service provider) involves negative emotions such as anger, positive
reciprocity (i.e. gratitude to the service provider) involves positive emotions such as happiness. The fact
that positive reviews are not as emotional as negative reviews confirms that reciprocation of behavior
towards the service provider, which elicits emotional expression in reviews, is a weaker driver of
positive reviews than of negative reviews.
The previous findings suggest that while reciprocating the service provider is a stronger
motive for negative review occurrence than positive review occurrence, the obligation to the audience is
a stronger predictor of positive than negative review occurrence. Assuming that the feeling of
obligation to the audience encourages authors to post reviews that are helpful to those who read them,
positive reviews should be more helpful than negative reviews. The logicality dimension attempts to
gauge the extent to which the opinions presented in the reviews are supported by logical, easy-to-follow
arguments and are therefore more helpful to the reader. However, I did not find a significant positive
relationship between reviews’ valence and the extent to which they are logical, although the pattern was
in the predicted direction, (r(63)=.076, p>.2).
Content Categories. I also conducted a series of logistic regressions to explore whether
valence of reviews affected which content themes were mentioned (see Table 15 for regression
coefficients and significance levels). Interestingly, authors of positive reviews were marginally more
likely to mention the price/cost of the dental procedures than authors of negative reviews (β(.119)=.217,
p=.067).
dental practice was its strength. But if positive reviews are driven by an other-focused (i.e. audience-focused)
motive, while negative reviews are driven by a self-focused (i.e. self-expression) motive, then the
difference in the frequency of the price theme across valence may also imply that the price issue was
perceived by the authors as more instrumental for the audience than for self-expression.
Review Length. Table 18 summarizes the regression models predicting review length. I first
regressed word count on relationship strength, tenure, family members and patients’ satisfaction. I
expected reviews written for friends to be longer than those written for strangers, assuming that the
length of reviews operationalizes the level of effort authors put forth in posting the review. I also
expected authors with more tenure to write longer reviews than their less tenured counterparts, because
tenure brings a larger volume of experience, which authors can draw on for the content of their review.
This model revealed a significant positive tenure effect (β(3.44)=6.99, p=.047), while the other effects
were not significant. Follow-up regression analysis also revealed a highly significant tenure x patient
satisfaction crossover interaction (β(1.48)=4.31, p=.005). As Figure 19 illustrates, for positive reviews
(i.e. those reviews that were composed by patients whose average satisfaction ratings were 5.5 or
above) there was a positive relationship between authors’ tenure with the dental office and the reviews’
length, β(2.95)=11.78, p<.0001. But for negative reviews (i.e. those reviews that were composed by
authors whose average satisfaction ratings were 2.5 or below) there was a negative relationship between
authors’ tenure and reviews’ length, β(5.71)=-14.10, p=.022. This pattern may serve as evidence for a
feeling of entitlement to post negative reviews, in the absence of obligation to write longer (i.e. more
helpful) reviews.
Table 18. Model summary predicting length of dental reviews
(entries are β, S.E., p)

Models 1-5 (full sample), in order:
  Satisfaction:           -1.01, 4.39, 0.82;   -12.58, 7.64, 0.11;   -14.33, 7.66, 0.066;   -3.93, 4.13, 0.35;   7.82, 3.25, 0.02
  Relationship Strength:  18.25, 21.07, 0.39;   45.08, 24.62, 0.055;   45.08, 24.15, 0.067;   23.6, 19.91, 0.24
  Tenure:                  6.99, 3.44, 0.047;    7.36, 3.35, 0.032;     8.14, 3.19, 0.013;    -5.03, 4.35, 0.25
  Family Members:         -0.19, 6.61, 0.978;    1.27, 6.46, 0.84;     -4.77, 6.22, 0.94;    -13.68, 9.59, 0.17;   3.11, 6.28, 0.625
  Satisfaction x Relationship Strength:  18.05, 9.19, 0.054;  18.92, 9.04, 0.041
  Satisfaction x Tenure:                  4.15, 1.46, 0.006;   4.31, 1.48, 0.005
Model 6 (negative reviews): Relationship Strength 22.12, 25.62, 0.4; Tenure -14.05, 5.71, 0.022
Model 7 (positive reviews): Relationship Strength 91.51, 23.56, 0.001; Tenure 11.78, 2.95, <.0001
Figure 19. Relationship between patient tenure and review length, split by review valence
Recall that in Study 5, dissatisfied patients were nearly two times more likely to post reviews than
satisfied ones. This effect was attributed to patients’ stronger desire to retaliate than to show gratitude.
In order to retaliate, one must merely post a negative review; it need not be long, especially if the
author feels entitled to express a negative opinion at the expense of the service provider’s reputation. One
source of entitlement could be the patients’ tenure with the dental office. Taken together, this may
explain the negative relationship between tenure and word count.
Another interesting finding was a marginally significant relationship strength x patient
satisfaction interaction (β(9.19)=18.05, p=.054). I expected there to be an overall positive effect of
relationship strength on word count due to authors’ feeling of obligation to not merely post a review,
but to post a long (i.e. helpful) review. While this main effect was in the predicted direction, it was
only marginally significant when the significant interaction coefficient was entered in the regression
β(24.62)=45.1, p=.06. Follow-up analysis revealed that this positive effect of relationship strength on
review length was only significant for positive reviews (β(23.56)=91.51, p=.001). This pattern is in line
with previous findings that show obligation to the audience to drive positive reviews more than negative
reviews.
Summary
To summarize, the findings from this field experiment provide more direct evidence for
opinion expression norms favoring positive over negative consumer reviews. However, there are
important caveats in the data that shed light on the overarching question of what drives people to post
reviews. First, while relationship strength with the audience increased the occurrence of positive
reviews, I could not find any evidence for relationship strength reducing the occurrence of negative
reviews. Before accepting the null, however, I note that my hypothesis test may be limited: the
operationalization of relationship strength as a categorical variable, and my comparison of friends with
recent neighbors, may not constitute a strong enough manipulation. Another interesting observation is that this
sample of dental office reviews suffers from a negative opinion expression bias (i.e. the average valence
of reviewed experiences was lower than the average valence of the entire sample of experiences).
While this bias was mitigated by patients’ relationship strength with their audience, even when
participants wrote reviews for friends, they were more likely to share negative than positive
experiences. This finding highlights the complexity in the overall motivational structure of review
occurrence.
This study also attempted to account for this negative expression bias, by exploring the role of
another motive for review posting behavior: reciprocating the service provider. The findings suggest
that the overall negative opinion expression bias may be due to stronger negative reciprocity intentions
(in the form of retaliation) than positive reciprocity intentions (in the form of gratitude), causing
patients to review their negative experiences with a higher frequency than positive experiences.
Specifically, tenure with the dental office (and thus an operationalization of relationship strength with the
dental practice) exacerbated the negative opinion expression bias. If, indeed, the reciprocity motives are
driven by the strength of relationship with the service provider, then the data serves as preliminary
evidence for a role of reciprocity motives in review posting activity. It is important to note the limited
generalizability of these findings. The consumer experience that was the focus of this field study
involved an intimate relationship with a health service provider, which may not be as strong or even
present for other services and products. In fact, a doctor-patient relationship may be among the
strongest and most involved kinds of consumer-producer relationships. Hence, the reciprocity motive
and the resulting negative opinion expression bias may be less relevant for reviews of products and
services which lack a strong interpersonal component in their consumer experiences.
In addition to investigating review occurrence patterns, this study also explored review
characteristics. Although this part of the analysis lacked power due to the limited sample size, some
interesting patterns were noted. The negative relationship between reviews’ valence and the extent to
which they were emotional supports the notion that negative reviews are brought about by emotions
more than positive reviews. I also found that reviews written for friends were longer than reviews
written for neighbors, especially if those reviews were pronormative (i.e. positive). Patients’ tenure
with the dental practice also affected both the frequency and length of reviews. While tenure increased
the occurrence of negative reviews, it decreased their length. For positive reviews, tenure increased
both their occurrence and length. This pattern suggests that tenure is an operationalization not only of
relationship strength with the service provider, but also of other latent variables.
Finally, my exploration of content of reviews may partially explain why readers show a
preference for positive over negative reviews, as was evident in the Epinions data set as well as in the
CNET data set (Wu & Huberman, 2008). Specifically, stronger emotional expression in negative
reviews may distract readers from the communicator’s message, and make it less helpful to the
readers. In line with this argument, I also expected negative reviews to be less logical than positive
reviews. However, at least in this data set, this effect was not detected.
CHAPTER 6
Theoretical Contributions
While behavioral economics investigations provide robust evidence for the discrepancy
between product quality and product review distributions, little attention has so far been devoted to
the psychological antecedents of these reporting biases. This dissertation demonstrates that review
posting behavior is a complex social phenomenon, shaped by social norms and obligations.
Furthermore, the relevance and strength of these social pressures may vary due to contextual
factors, individual differences, and the countless interactions among them. Given the complexity
and fluidity of the motivational model of consumer reviews, I do not claim that social motives are
ubiquitous and dominate all review posting behavior. Rather, the empirical findings of this
dissertation exemplify numerous ways that social context shapes review posting patterns and thereby
produces systematic reporting biases in review valence distributions.
The investigation of Epinions reviews and their authors (Chapters 2 and 3) showed that social
pressures differentially promote and prevent consumer opinion expression as a function of the valence of
the consumers’ experience. The statistical analyses support my overarching prediction that the
overrepresentation of positive reviews and/or underrepresentation of negative reviews in the review
valence distribution is driven by authors who are more active and collectively identified.
The vignette study findings (Chapter 4) provided two additional forms of evidence for reviews
as speech acts. First, the finding that a service’s previous review history influenced participants’
willingness to post subsequent reviews supports the notion that reviews are interdependent speech acts,
rather than isolated bursts of self expression. Second, the positive relationship between collective
identity and willingness to post a positive review, and the negative relationship between self-monitoring
and the willingness to post a negative review, suggest that reviews are a product of social motives such
as obligations to the audience and impression management concerns. These findings challenge the
notion of reviews’ authors as electronic sociopaths, and contribute to the growing realization in the
literature that review posting behavior is governed by a complex network of social norms.
This dissertation is also the first to combine archival and survey methodology in the study of
consumer review posting behavior. The mixed method approach permits me not only to document the
valence distribution of reviews, but also to track the social psychological mechanism that may account
for its shape. Because previous phenomenological studies were conducted on product level of analyses,
the researchers were blind to the authors’ psychology and could not investigate the source of the J
shape (i.e. positive skew) that typically characterizes product review valence distributions (Hu et al.,
2007). This dissertation attempts to fill this void, by introducing author characteristics into the model
and showing that the J shape in the review valence distribution is driven by the most active and
collectively identified authors.
These findings also specify the boundary conditions for the positive opinion reporting bias,
which I argue is the source of the positive skew in the products’ review valence distributions.
Specifically, I expect the J shape to be less prevalent in consumer review forums that lack community
features, attract less experienced authors, or are governed by social pressures that favor negative instead
of positive reviews. Results of the dental office field experiment (Chapter 5) speak to another boundary
condition of a positive opinion reporting bias in the review valence distribution: the relative importance
of consumers’ relationship with the service provider in the overall motivational model of posting
behavior. Although I confirmed that the normative pressures influencing patients’ posting behavior
select for positive over negative reviews, I detected a strong negative reporting bias. This pattern was
traced back to consumers’ stronger negative reciprocity (i.e. retaliation) than positive reciprocity (i.e.
gratitude) intentions toward the service provider. Given these findings, I expect that the review valence
distribution of services that involve a meaningful relationship between the provider and the consumer
(e.g. healthcare) would not be J-shaped, but instead L-shaped (i.e. negatively skewed), with more
negative than positive reviews. Ultimately, the direction of the reporting bias for a particular product,
within a specific consumer review forum, depends on the relative importance of reciprocity intentions
and social conformity pressures within the whole motivational model of review posting behavior.
Overall, this dissertation shows that social norms are relevant in computer mediated
communication. Through a variety of methods I demonstrate how eWOM, in particular consumer
reviews, are speech acts with the social goal of sustaining a meaningful exchange of information
(Grice, 1989; Ho & Swan, 2004). Although it is a virtual truism that traditional WOM is an act of
cooperative communication, the important differences between offline and online contexts
warranted this dissertation in order to establish the latter as a social act as well. My findings imply
that the production of online consumer reviews is a social process, regulated by social norms, and
motivated by social obligations. One methodological implication for researchers of traditional
WOM communication is that online consumer review forums can serve as legitimate and organized
sources of empirical data, the abundance of which is nearly impossible to produce offline or in
laboratory experiments. Although WOM communication is one of the oldest and most ubiquitous
forms of consumer behavior, our understanding of its causes and effects is minimal and
theoretical due to data source limitations. As researchers tap into the countless archives of online
reviews, our understanding of the antecedents and implications of WOM communication will
progress at a much quicker rate than in the past.
In conclusion, this dissertation contributes to a growing body of work showing that social
factors help account for variations in technology usage. In this respect, this investigation supports the
notion of computer-mediated communication as a speech act with the social goal of sustaining
meaningful exchange of information. The influence of social factors in computer-mediated
communication indicates that although the medium may alter “normal” interactions, it still provides a
vehicle for normative social regulation.
Practical Implications
Although it is premature to draw definitive conclusions from this exploratory investigation,
there are important takeaways that are worthy of discussion for forum administrators, consumers, and
marketers.
Forum administrators
Marketing literature confirms that consumers pay attention to online product reviews and
act upon them to make purchasing decisions (Chatterjee, 2001; Chevalier & Mayzlin, 2006;
Senecal & Nantel, 2004). In fact, several companies (e.g. Epinions.com, Amazon.com,
Citysearch.com, AngiesList.com, Yelp, etc.) have recognized a business opportunity in this
phenomenon and are proactively trying to induce consumers to “speak the word” on their online
platforms about products and services (Godes et al., 2005). The findings from this investigation
show that the social environment of the consumer review forum can introduce systematic reporting
biases in product reviews. For example, because the helpfulness ratings of Epinions reviews
rewarded positive over negative reviews, authors who were more experienced and collectively
identified with the Epinions community wrote more positive than negative reviews. Interestingly,
inexperienced authors (i.e. one-time reviewers) wrote more negative than positive reviews. These
findings suggest that a review forum’s tendency to attract one-time or experienced reviewers
should affect the review valence distribution of that particular forum.
Forums with community features and social networking abilities are likely to attract
collectively identified authors who are motivated to conform to both the descriptive and
prescriptive social norms of that forum. Yelp, for example, is a hybrid social networking and
consumer review platform. By encouraging users to develop their identities and form direct
relationships with each other, Yelp administrators encourage posting activity and increase author
accountability. While this socially rich structure may increase the quality of individual reviews,
when the reviews are aggregated to provide summary statistics of a given service provider,
systematic biases may still be present. Yelp administrators state that their overall review
distribution shows more positive than negative reviews,12 which makes positive reviews more
typical of Yelp posting behavior than negative reviews. Even if this descriptive norm is not
accompanied by a prescriptive norm favoring positive over negative reviews (as it is on Epinions
and CNET), authors may still use this norm to guide their behavior and adhere to it for consensus’
sake. Ironically, the community features that increase the quality of individual reviews may reduce
the utility of these reviews in aggregate, due to norm conformity biases.
There are a number of ways that forums like Yelp can combat these negative side effects of community features. First, Yelp administrators can ask authors to report on their level of comfort in posting positive and negative reviews. They can then use this information to weigh the relative influence of a given review on the aggregated service rating score. For example, a negative review written by an author who does not feel comfortable posting negative reviews should be given more weight than a negative review written by an author who feels very comfortable posting negative reviews. Yelp already incorporates a variety of author characteristics in producing summary statistics for services, so this is another variable that it should collect and consider. Second, Yelp can ask authors whether they read previous reviews of the service before posting their own review. The findings from the vignette study (Chapter 4) suggest that authors who are collectively identified and have elevated impression management concerns are hesitant to report an opinion that disturbs consensus. By knowing whether authors considered previous reviews before posting their own, and the valence of those previous reviews, Yelp administrators can gauge whether pressures toward consensus were present. Both of these suggestions involve collecting information about the social context at the time the review was posted. It is important to note that while incorporating social contextual information may reduce biases in aggregated statistics, it may also reduce authors' motivation to post: the requirement to answer these questions increases the effort authors must exert, thereby decreasing the overall reporting rate.

12 http://www.yelp.com/business/common_questions
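The comfort-based weighting suggested above can be sketched as follows. The weighting rule, its range, and the sample data are illustrative assumptions of mine, not a scheme Yelp actually implements:

```python
# Illustrative sketch: weight each review's star rating by how "surprising"
# it is given the author's self-reported comfort with posting reviews of
# that valence. All weights and data here are hypothetical.

def review_weight(rating, comfort_with_valence):
    """comfort_with_valence: the author's self-reported comfort (0.0-1.0)
    with posting reviews of this rating's valence. Less comfort means the
    posted review overcame a stronger barrier, so it carries more weight."""
    return 2.0 - comfort_with_valence  # weight in [1.0, 2.0]

def weighted_average_rating(reviews):
    """reviews: list of (rating, comfort_with_valence) tuples."""
    total = sum(r * review_weight(r, c) for r, c in reviews)
    weight_sum = sum(review_weight(r, c) for r, c in reviews)
    return total / weight_sum

# A negative review from an author uncomfortable with negativity (comfort 0.1)
# counts more than praise from authors who post comfortably (comfort 0.8-0.9).
reviews = [(5, 0.8), (5, 0.9), (1, 0.1)]
print(round(weighted_average_rating(reviews), 2))  # prints 3.19
```

The reluctant 1-star review pulls the weighted average below the unweighted mean of 3.67, which is the intended effect of the scheme.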
On the other extreme of Yelp are forums like Citysearch and Healthgrades.com, which
have no community features. While the valence distributions of reviews and ratings on these sites
may not be skewed by normative pressures, the lack of accountability may reduce the overall
quality of reviews. The minimal amount of effort required to share one’s opinion drives the
volume of activity on these websites, providing readers with a large array of diverse opinions. The
findings from the dental reviews field experiment suggest that reporting biases may still exist due
to inequity in the strength of the reciprocity motives that follow positive and negative service
experiences. One way to address this valence-based reporting bias is to actively solicit reviews and
reward consumers for posting them. Specifically, forum administrators can capitalize on the tendency
of financial incentives to dominate and crowd out other motives. Essentially, by offering financial
incentives to consumers, the influence of the other motives that drive valence-based reporting biases
will be reduced.
The most recent trend in eWOM communication is consumer review aggregation sites,
such as Google Reviews. These data mining systems collect reviews across forums, publish them
in one place, and summarize them into aggregate statistics such as means. In this way, consumers
are provided with the maximum volume of WOM information about products and services without
having to visit numerous consumer forums where these reviews were originally posted.
Furthermore, the forum specific reporting biases should be washed out when the reviews are
aggregated across forums which vary in how they elicit reviews, the type of authors they attract,
and their social normative landscapes. The drawback to aggregating across forums is that reviews
are stripped of their social normative context, making it more difficult for readers to interpret and
weigh a given review. When a consumer reads a review republished on Google, more often than
not, they are unfamiliar with the social context of the forum on which it was posted. Google
Reviews addresses this to some extent, by providing a link to the source of the reviews.
Consumers, however, may not be familiar with the specific norms of the forum to which the review
is linked or may not be motivated to actively seek out this information in the first place.
One way to address the lack of social context information is for aggregation sites such as
Google Reviews to incorporate forum characteristics in the calculation of aggregate statistics. For
instance, positive reviews from Epinions should carry less weight than positive reviews from other
forums that are not dominated by positive reviews. Beyond overall review valence
distributions, forums can be assessed on other relevant characteristics such as the extent to which
authors are accountable and committed to the forum community, whether the community consists
of experts, enthusiasts, or naïve consumers, etc. Essentially, by incorporating forum effects into
aggregation statistics, sites like Google Reviews will aid consumers with not just gathering eWOM
information but also managing and interpreting it.
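One way such forum-level weighting might be implemented is sketched below. The valence threshold, the weight cap, the forum names, and the ratings are all hypothetical choices of mine, not an actual aggregator's method:

```python
# Illustrative sketch: when pooling reviews across forums, down-weight a
# review whose valence is dominant in its home forum (it largely reflects
# that forum's norm) and up-weight reviews that go against the norm.
# The "r >= 4 is positive" threshold and the 0.1 cap are assumptions.

def forum_adjusted_mean(reviews_by_forum):
    """reviews_by_forum: dict mapping forum name -> list of star ratings."""
    weighted_sum, weight_sum = 0.0, 0.0
    for ratings in reviews_by_forum.values():
        pos_share = sum(r >= 4 for r in ratings) / len(ratings)
        for r in ratings:
            # share of same-valence reviews in this forum
            share = pos_share if r >= 4 else 1.0 - pos_share
            weight = 1.0 / max(share, 0.1)  # cap to avoid extreme weights
            weighted_sum += r * weight
            weight_sum += weight
    return weighted_sum / weight_sum

ratings = {
    "positivity_dominated_forum": [5, 5, 4, 5],  # an Epinions-like norm
    "neutral_forum": [3, 2, 4, 3],
}
print(round(forum_adjusted_mean(ratings), 2))  # below the raw mean of 3.88
```

Because every positive review on the positivity-dominated forum carries the minimum weight, that forum's norm contributes less to the pooled score than it would under a simple mean.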
Consumers
While WOM has traditionally been spread among acquaintances through social
“contagion”, the internet has dramatically increased the scale and availability of WOM for the
global community of internet users (Dellarocas, 2003). The main takeaway of this dissertation for
consumers is to keep in mind that reviews, like traditional WOM, are intentional social actions that
are motivated by a complex set of extrinsic and intrinsic motives. When interpreting a given
review, understanding the motivational structure of that review is as important as its content,
because reporting biases will always be present as long as review posting behavior is voluntary.
One source of motives is the social normative landscape of the forum. Therefore, it pays for
consumers to be familiar with the activity patterns, community structure and social norms of the
forums that they use as a source of reviews. In addition, when selecting which forums to use as a
source of reviews, consumers should pick forums that provide them with the most information
about the authors and these authors’ review posting history.
Consumers must also be careful in interpreting aggregated review statistics. Within a
given forum, aggregated statistics can be biased due to the social normative landscape of that
forum. For instance, the Epinions forum has a positive opinion reporting bias. For aggregated
review statistics gathered across forums (e.g. Google Reviews), the effects of forum specific biases
may be diluted. However, more ambiguity is introduced in interpreting a given review, because
when it is published on the aggregator site, it is stripped of information about the author and the
surrounding social context. To sum up, consumers who seek the most representative review
aggregation statistics should refer to aggregator sites; consumers who seek the highest-quality
reviews, and plan to read and interpret their content, should refer to forums that have advanced
community features, making the authors more accountable and obligated to their audience.
Marketers
Consumers pay attention to online product reviews and act upon them to make purchasing
decisions (Chatterjee, 2001; Chevalier & Mayzlin, 2006; Senecal & Nantel, 2004). A study
conducted by DoubleClick (2007) found that eWOM plays a very important role in consumers'
purchasing process for many types of products and services. Among web users, content on the web
has moved into second place, ahead of printed reviews and advice from sales people in influencing
consumer decisions (Max & Mace, 2008). In fact, research in behavioral economics
repeatedly finds positive relationships between user-generated content and product sales (Chevalier
& Mayzlin, 2006; Senecal & Nantel, 2004; Dellarocas et al., 2004; Godes & Mayzlin, 2004). It is
therefore crucial for marketers to manage their public image online. Given the reporting biases that
plague review posting behavior, providing a superior consumer experience does not guarantee a
superior reputation. As the findings from the dental study (Chapter 5) illustrate, dissatisfied
consumers were more than twice as likely to post reviews as satisfied ones. The preliminary
findings suggest that this was driven by negative reciprocity motives trumping positive reciprocity
motives. To combat this valence-based reporting bias, the dental office can provide an effective
avenue for its patients to complain and resolve their issues before they complain publicly by posting
a negative review. The dental office can also increase the positive reciprocity intentions of its
satisfied patients by soliciting them to write a review. The latter solution, while
in the interests of the service provider, creates another bias, and in this way reduces the utility of
these reviews for consumers.
Reviews are increasingly being used by marketers as measures of customer satisfaction.
Although this rich behavioral data is readily available and requires no effort to generate or collect,
reporting bias can lead to erroneous conclusions when single-point estimators are used, such as the
mean or the percentage of high ratings. Biases in distributions of online product reviews can lead to invalid
statistical assumptions and conclusions about consumer product preferences, resulting in incorrect
marketing decisions. While this dissertation provides compelling evidence that the biases exist, their
specific direction and strength should not be generalized outside the datasets. It is therefore beyond the
scope of this exploratory investigation to make specific recommendations on how to account for
reporting biases in review posting behavior.
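To illustrate why single-point estimators mislead under a J-shaped distribution, consider the following toy example; the rating counts are fabricated for illustration and are not data from this dissertation:

```python
# Illustrative example: with a J-shaped (bimodal) ratings distribution,
# the mean lands on a value that almost no consumer actually reported,
# and it hides the sizeable block of 1-star experiences.
from statistics import mean

# hypothetical counts: many 5s, a heavy tail of 1s, few in between
j_shaped = [5] * 70 + [4] * 5 + [3] * 3 + [2] * 2 + [1] * 20

print(mean(j_shaped))                      # prints 4.03, yet...
print(j_shaped.count(4) / len(j_shaped))   # ...only 5% of reviewers said 4
```

A marketer reading the mean of 4.03 as "typical satisfaction" would miss that a fifth of these hypothetical consumers had a 1-star experience.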
Concluding Statement
Online reviews play an increasingly important role in consumers' purchase decisions. Such
internet-enabled consumer WOM communication offers society a tremendous potential to reduce
information asymmetries and in this way, increase the efficiency of electronic and traditional markets
(Dellarocas, 2005). Voluntary reporting, however, introduces the potential for reporting biases. In this
dissertation, I have shown through various methods and contexts that the propensity to report a
privately observed outcome (i.e. consumer experience) to a public online forum is conditioned on the
social normative landscape of this public space. The implications of these and future findings are clear:
Biased distributions of online product reviews can lead to inefficiencies in consumer choice and even erroneous
conclusions about consumer product preferences.
REFERENCES
Abrams, D., Marques, J. M., Bown, N., & Henson, M. (2000). Pro-norm and anti-norm deviance within and between groups. Journal of Personality and Social Psychology, 78(5), 906-912.
Ainsworth, A. B. (2004). Thiscompanysucks.com: The use of the Internet in negative consumer-to-consumer articulations. Journal of Marketing Communications, 10, 169-182.
Bagozzi, R. P., & Dholakia, U. M. (2002). Intentional social action in virtual communities. Journal of Interactive Marketing, 16(2), 2-21.
Bagozzi, R. P., & Lee, K. (2002). Multiple routes for social influence: The role of compliance, internalization, and social identity. Social Psychology Quarterly, 65(3), 226-247.
Banerjee, A. (1993). The economics of rumours. Review of Economic Studies, 60, 309-327.
Bickart, B., & Schindler, R. M. (2001). Internet forums as influential sources of consumer information. Journal of Interactive Marketing, 15(3), 31-40.
Biyalogorsky, E., Gerstner, E., & Libai, B. (2001). Customer referral management: Optimal reward programs. Marketing Science, 20(1), 82-95.
Bouas, K. S., & Arrow, H. (1996). The development of group identity in computer and face-to-face groups with membership change. Computer-Supported Cooperative Work, 4, 127-152.
Brown, J. J., & Reingen, P. H. (1987). Social ties and word-of-mouth referral behavior. Journal of Consumer Research, 14(4), 350-362.
Burnett, G., & Buerkle, H. (2004). Information exchange in virtual communities: A comparative study. Journal of Computer-Mediated Communication, 9(2). Available online: http://www.ascusc.org/jcmc
Buttle, F. A. (1998). Word-of-mouth: Understanding and managing referral marketing. Journal of Strategic Marketing, 6, 241-254.
Chatterjee, P. (2001). Online reviews – do consumers use them? In M. C. Gilly & Myers-Levy (Eds.), ACR 2001 Proceedings (pp. 129-134). Provo, UT: Association for Consumer Research.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345-354.
Dabholkar, P. A., & Bagozzi, R. P. (2002). An attitudinal model of technology-based self-service: Moderating effects of consumer traits and situational factors. Journal of the Academy of Marketing Science, 30(3), 184-201.
Dellarocas, C. (2003). The digitization of word of mouth: Promise and challenges of online feedback mechanisms. Management Science, 49(10), 1407-1424.
Dellarocas, C. (2006). Strategic manipulation of internet opinion forums: Implications for consumers and firms. Management Science, 52(10), 1577-1593.
Dellarocas, C., Awad, N. F., & Zhang, X. (2004). Exploring the value of online reviews to organizations: Implications for revenue forecasting and planning. Proceedings of the 24th International Conference on Information Systems, Washington, D.C.
Dellarocas, C., & Narayan, R. (2006). What motivates a consumer to review a product online? A study of the product-specific antecedents of online movie reviews. Workshop on Information Systems and Economics.
Dellarocas, C. N., & Wood, C. A. (2006). The sound of silence in online feedback: Estimating trading risks in the presence of reporting bias. Robert H. Smith School Research Paper No. RHS 06-041. Available at SSRN: http://ssrn.com/abstract=923823
Dellarocas, C., Zhang, X., & Awad, N. F. (2007). Exploring the value of online product reviews in forecasting sales: The case of motion pictures. Journal of Interactive Marketing, 21(4), 23-45.
DeSanctis, G., & Gallupe, R. B. (1987). A foundation for the study of group decision support systems. Management Science, 33, 589-609.
DoubleClick (2007). DoubleClick's Touchpoints IV: The changing purchase process.
Eagly, A. H., Wood, W., & Chaiken, S. (1978). Causal inferences about communicators and their effect on opinion change. Journal of Personality and Social Psychology, 36(4), 424-435.
Eliashberg, J., Jonker, J., Sawhney, M., & Wierenga, B. (2000). MOVIEMOD: An implementable decision-support system for prerelease market evaluation of motion pictures. Marketing Science, 19(3), 226-243.
Eliashberg, J., & Shugan, S. (1997). Film critics: Influencers or predictors? Journal of Marketing, 61(2), 68-78.
Engel, J. F., Kegerreis, R. J., & Blackwell, R. D. (1969). Word-of-mouth communication by the innovator. The Journal of Marketing, 33(3), 15-19.
Engel, J. F., & Light, M. L. (1968). The role of psychological commitment in consumer behavior: An evaluation of the theory of cognitive dissonance. In F. M. Bass, C. W. King & E. M. Pessemier (Eds.), Applications of the Science in Marketing Management (pp. 39-68). New York, NY: John Wiley & Sons, Inc.
Finholt, T., & Sproull, L. S. (1990). Electronic groups at work. Organization Science, 1, 41-64.
Gao, G., & Gu, B. (2008). The dynamics of online consumer reviews. Workshop on Information Systems and Economics.
Ghose, A., & Ipeirotis, P. G. (2007). Designing novel review ranking systems: Predicting usefulness and impact of reviews. Paper presented at the International Conference on Electronic Commerce, Minneapolis, Minnesota, USA.
Godes, D., & Mayzlin, D. (2004). Using online conversations to study word-of-mouth communication. Marketing Science, 23(4), 545-560.
Grice, H. P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Hagel, J., & Armstrong, A. G. (1997). Net gain: Expanding markets through virtual communities. Cambridge, MA: Harvard Business School Press.
Hauben, M., & Hauben, R. (1999). On the history and impact of Usenet and the Internet. Los Alamitos, CA: IEEE Computer Society Press.
Hennig-Thurau, T., Gwinner, K., Walsh, G., & Gremler, D. (2004). Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the internet? Journal of Interactive Marketing, 18(1), 38-52.
Herr, P. M., Kardes, F. R., & Kim, J. (1991). Effects of word-of-mouth and product-attribute information on persuasion: An accessibility-diagnosticity perspective. Journal of Consumer Research, 17(March), 454-462.
Hiltz, S. R., Johnson, K., & Turoff, M. (1986). Experiments in group decision making. Human Communication Research, 13, 225-252.
Ho, C., & Swan, K. (2007). Evaluating online conversation in an asynchronous learning environment: An application of Grice's cooperative principle. Internet and Higher Education, 10, 3-14.
Hoffman, D. L., Novak, T. P., & Chatterjee, P. (1995). Commercial scenarios for the web: Opportunities and challenges. Journal of Computer-Mediated Communication, 1(3). Available online: http://www.ascusc.org/jcmc
Hogg, M. A., & Turner, J. C. (1985). Interpersonal attraction, social identification and psychological group formation. European Journal of Social Psychology, 15(1), 51-66.
Holmes, J. H., & Lett, J. D., Jr. (1977). Product sampling and word of mouth. Journal of Advertising Research, 17, 35-40.
Hong, J., & Lee, W. (2005). Consumer complaining behavior in the online environment. In Y. Gao (Ed.), Web Systems Design and Online Consumer Behavior (pp. 90-106). Hershey, PA.
Horrigan, J. B., & Rainie, L. (2002). Getting serious online. Washington, D.C.: Pew Internet & American Life Project. Available online: http://www.pewinternet.org/reports
Hu, N., Pavlou, P., & Zhang, J. (2006). Can online product reviews reveal a product's true quality? Empirical findings and analytical modeling of online word-of-mouth communication. Proceedings of the 7th ACM Conference on Electronic Commerce (pp. 324-330). Ann Arbor, Michigan, USA.
Hu, N., Pavlou, P., & Zhang, J. (2007). Why do online product reviews have a J-shaped distribution? Overcoming biases in online word-of-mouth communication. (Working paper.)
Jones, S. G. (1998). Information, internet, and community: Notes towards an understanding of community in the Information Age. In S. G. Jones (Ed.), CyberSociety 2.0: Revisiting Computer-Mediated Communication and Community (pp. 1-35). Thousand Oaks/London/New Delhi: SAGE Publications.
Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39(10), 1123-1134.
Kowalski, R. M., & Cantell, C. C. (1995). Psychometric properties of the propensity to complain scale. (Unpublished manuscript.) Western Carolina University, Cullowhee, NC.
Kozinets, R. V. (1997). I want to believe: A netnography of The X-Philes' subculture of consumption. In M. Brucks & D. MacInnis (Eds.), Advances in Consumer Research, 24 (pp. 470-475). Provo, UT.
Kozinets, R. V. (1999). Group decision making and communication technology. Organizational Behavior and Human Decision Processes, 52, 96-123.
Kozinets, R. V. (2002a). The field behind the screen: Using netnography for marketing research in online communities. Journal of Marketing Research, 39, 61-72.
Kozinets, R. V. (2002b). Can consumers escape the market? Emancipatory illuminations from Burning Man. Journal of Consumer Research, 29, 20-38.
Krider, R., & Weinberg, C. (1998). Competitive dynamics and the introduction of new products: The motion picture timing game. Journal of Marketing Research, 35(1), 1-15.
Laczniak, R. N., DeCarlo, T. E., & Ramaswami, S. N. (2001). Consumers' responses to negative word-of-mouth communication: An attribution theory perspective. Journal of Consumer Psychology, 11(1), 57-73.
Lea, M., & Spears, R. (1991). Computer-mediated communication, de-individuation and group decision-making. International Journal of Man-Machine Studies, 39, 283-301.
Lea, M., Spears, R., & de Groot, D. (1998). Knowing me, knowing you: Effects of visual anonymity on stereotyping and attraction in computer-mediated groups. (Unpublished manuscript.)
Lenhart, A., Horrigan, J., & Fallows, D. (2004). Content creation online. Washington, D.C.: Pew Internet & American Life Project. Available online: http://www.pewinternet.org/reports
Lennox, R. D., & Wolfe, R. N. (1984). Revision of the self-monitoring scale. Journal of Personality and Social Psychology, 46(6), 1349-1364.
Luhtanen, R., & Crocker, J. (1992). Collective self-esteem: Self-evaluation of one's social identity. Personality and Social Psychology Bulletin, 18(3), 302-318.
Mahajan, V., Muller, E., & Kerin, R. A. (1984). Introduction strategy for new products with positive and negative word-of-mouth. Management Science, 30(12), 1389-1404.
Marques, J., & Paez, D. (1994). The black sheep effect: Social categorization, rejection of in-group deviates, and perception of group variability. In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (Vol. 5, pp. 37-68). Chichester, UK: Wiley.
Marques, J. M., Yzerbyt, V. Y., & Leyens, J. P. (1988). The "black sheep effect": Extremity of judgments towards in-group members as a function of group identification. European Journal of Social Psychology, 18, 1-16.
Marquis, M., & Filiatrault, P. (2002). Understanding complaining responses through consumers' self-consciousness disposition. Psychology and Marketing, 19(3), 267-292.
Max, H., & Mace, M. (2008). Online communities and their impact on business: Ignore at your peril. Rubicon Consulting Inc. http://rubiconconsulting.com/insight/whitepapers/2008/10/
McGuire, T. W., Kiesler, S., & Siegel, J. (1987). Group and computer-mediated discussion effects in risk decision making. Journal of Personality and Social Psychology, 52, 917-930.
Mittal, V., Ross, W. T., & Baldasare, P. M. (1998). The asymmetric impact of negative and positive attribute-level performance on overall satisfaction and repurchase intentions. The Journal of Marketing, 62(1), 33-47.
Mizerski, R. W. (1982). An attribution explanation of the disproportionate influence of unfavorable information. Journal of Consumer Research, 9, 301-310.
Nyer, P. U. (2000). An investigation into whether complaining can cause increased consumer satisfaction. Journal of Consumer Marketing, 17(1), 9-19.
Parks, M. R., & Floyd, K. (1996). Making friends in cyberspace. Journal of Computer-Mediated Communication, 1(4). Available online: http://www.ascusc.org/jcmc
Parks, M. R., & Roberts, L. D. (1998). Making MOOsic: The development of personal relationships online and a comparison to their offline counterparts. Journal of Social and Personal Relationships, 15, 517-537.
Postmes, T. (1997). Social influence in computer-mediated groups. Unpublished Ph.D. thesis, University of Amsterdam, The Netherlands.
Postmes, T., & Spears, R. (1998). Deindividuation and anti-normative behavior: A meta-analysis. Psychological Bulletin, 123, 238-259.
Postmes, T., & Spears, R. (1999). Contextual moderators of gender differences and stereotyping in computer-mediated group discussions. (Manuscript submitted for publication.)
Postmes, T., Spears, R., & Lea, M. (1999). Social identity, normative content, and "deindividuation" in computer-mediated groups. In N. Ellemers, R. Spears & B. Doosje (Eds.), Social Identity: Context, Commitment, Content (pp. 164-183). Oxford, England: Blackwell Science.
Postmes, T., Spears, R., & Lea, M. (2000). The formation of group norms in computer-mediated communication. Human Communication Research, 26(3), 341-371.
Postmes, T., Spears, R., Sakhel, K., & de Groot, D. (1998). Social influence in computer-mediated groups: The effects of anonymity on group behavior. (Unpublished manuscript.)
Price, L. L., & Feick, L. F. (1984). The role of interpersonal sources and external search: An informational perspective. In T. C. Kinnear (Ed.), Advances in Consumer Research, 11 (pp. 250-255). Provo, UT: Association for Consumer Research.
Rice, R. E., & Love, G. (1987). Electronic emotion: Socio-emotional content in a computer-mediated network. Communication Research, 14, 85-108.
Rossiter, J. R., & Percy, L. (1997). Advertising Communications & Promotion Management. New York, NY: McGraw-Hill.
Rothaermel, F. T., & Sugiyama, S. (2001). Virtual internet communities and commercial success: Individual and community-level theory grounded in the atypical case of TimeZone.com. Journal of Management, 27(3), 297-312.
Schindler, R. M., & Bickart, B. (2005). Published word of mouth: Referable, consumer-generated information on the internet. In C. P. Haugtvedt, K. A. Machleit & R. F. Yalch (Eds.), Advertising and Consumer Psychology Conference, Seattle, WA, US (pp. 35-61). Mahwah, NJ: Lawrence Erlbaum Associates.
Senecal, S., & Nantel, J. (2004). The influence of online product recommendations on consumers' online choices. Journal of Retailing, 80, 159-169.
Sheth, J. N. (1971). Word of mouth in low risk innovations. Journal of Advertising Research, 11, 15-18.
Singh, J. (1990). Voice, exit and negative word-of-mouth behaviors: An investigation across three service categories. Journal of the Academy of Marketing Science, 18(1), 1-15.
Slama, M., & Celuch, K. (1994). Assertion and attention to social comparison information as influences on consumer complaint intentions. Journal of Consumer Satisfaction, 7, 246-251.
Snyder, M., & Gangestad, S. (1986). On the nature of self-monitoring: Matters of assessment, matters of validity. Journal of Personality and Social Psychology, 51(1), 125-139.
Spears, R., Lea, M., & Lee, S. (1990). De-individuation and group polarization in computer-mediated communication. British Journal of Social Psychology, 29, 121-134.
Sproull, L., & Kiesler, S. (1986). Reducing social context cues: Electronic mail in organizational communication. Management Science, 23(11), 1492-1512.
Stuteville, J. R. (1968). The buyer as a salesman. The Journal of Marketing, 32(3), 14-18.
Sundaram, D. S., Mitra, K., & Webster, C. (1998). Word-of-mouth communications: A motivational analysis. Advances in Consumer Research, 25, 527-531.
Swartz, T. A., & Stephens, N. (1984). Information search for services: The maturity segment. In T. C. Kinnear (Ed.), Advances in Consumer Research, 11 (pp. 244-249). Provo, UT: Association for Consumer Research.
Tajfel, H. (1978). Differentiation between social groups: Studies in the social psychology of intergroup relations. Oxford, England: Academic Press.
Tajfel, H., & Turner, J. C. (1986). Social identity theory of intergroup behavior. In W. Austin & S. Worchel (Eds.), Psychology of Intergroup Relations (2nd ed., pp. 7-24). Chicago, IL: Nelson-Hall.
Walther, J. B. (1997). Group and interpersonal effects in international computer-mediated collaboration. Human Communication Research, 23, 342-369.
Wu, F., & Huberman, B. A. (2008). Public discourse in the web does not exhibit group polarization. Available at SSRN: http://ssrn.com/abstract=1052321