
Measuring the Polarization Effects of Bot Accounts in the U.S. Gun Control Debate on Social Media

Mert Ozer, Mehmet Yigit Yildirim, Hasan Davulcu
{mozer,yigityildirim,hdavulcu}@asu.edu

Arizona State University, School of Computing, Informatics, and Decision Science Engineering

Tempe, AZ, U.S.

ABSTRACT
The design choices made by the developers of social media platforms are under public scrutiny. Socio-political polarization in information diffusion networks is irrefutably evident, and this raises a question about the acclaimed deliberative nature of communications taking place on these platforms. Moreover, recent studies have shown that they are vulnerable to being gamed by deceptive automated activities. In this work, we focus on the polarizing effects of bot activities on a political social media network. By studying the retweet network of 3.7 million Twitter users during the tragic Stoneman Douglas High School shooting event and its one-month aftermath, we find that bot accounts heavily contributed to online polarization. We use the random walk controversy (RWC) score to show that bots lead to statistically significant increases in polarization on 65% of the most popular debate-related hashtags. Next, we report four factors predicting increased diffusion rates for bot content: use of memes/videos (by an IRR factor of 59%), use of LIWC "they" category keywords (by 18%), moral-emotional words (by 11%), and bot accounts' follower count (by 0.01%). We further investigate the effect of self-disclosure as an automated account. We find no evidence that self-disclosing automated accounts contribute to polarization. Furthermore, we also observe a 10-fold decrease in engagement levels of human users with self-disclosing automated accounts.

ACM Reference Format:
Mert Ozer, Mehmet Yigit Yildirim, Hasan Davulcu. 2019. Measuring the Polarization Effects of Bot Accounts in the U.S. Gun Control Debate on Social Media. In Proceedings of ACM Conference (Conference'17). ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

1 INTRODUCTION
Social media has been one of the most prominent mediums of socio-political communication for the last decade. Its wide accessibility, ease of use, and outreach capacity have attracted millions to participate in political debates on these platforms. People have organized protest movements [38, 43] and toppled authoritarian regimes [40] with social media in their action toolkit. Social media has also become instrumental in campaigning for underrepresented issues

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
Conference'17, July 2017, Washington, DC, USA
© 2019 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM . . . $15.00
https://doi.org/10.1145/nnnnnnn.nnnnnnn

and communities with hashtag activism. It led to stronger voices and awareness in mainstream media, as in the cases of the #metoo [31] and #blacklivesmatter [12] movements.

The act of retweeting undeniably plays a crucial role in these information dissemination and campaign building processes on Twitter [9]. When Twitter users want to re-post or share another user's content on their own profile, they simply use the retweet functionality of the platform. This simple mechanism has given users the capability to share posts they like with their own followers. Its use, however, reaches beyond a simple intent to share when analyzed at a broader scale. Scholars present numerous anecdotal findings showing the connection between the use of this functionality and political homophily [16, 17], whereas Barbera et al. [4] present evidence for cases where cross-camp interactions are present, such as the Boston bombing, Winter Olympics, and Super Bowl events. However, the overall polarization on major political issues on Twitter has risen between 10% and 20% [24] over the last decade.

Positive and controversial aspects aside, the wide accessibility of social media also attracts malign use of these platforms at multiple levels. Researchers and data journalists have investigated and reported several cases of abuse, including but not limited to cyberbullying [28], anti-vaccination [33], ISIS propaganda [21], and white supremacist propaganda [34]. In the majority of the malign cases, automated accounts (a.k.a. bots) are found to be playing a significant role as well [11, 22, 36]. In this work, we focus on automated accounts' role in political polarization on Twitter retweet networks. We quantify the polarization impact that automated accounts induce and its textual, emotional, and behavioral correlates. To the best of our knowledge, this is the first work that tackles the problem of measuring the polarization impact of automated activity on social media.

Our investigation proceeds in two parts: on synthetically generated networks and on a real-world social media network. First, we set up synthetically generated network scenarios to evaluate the robustness of our experimental design. Second, we focus on a Twitter dataset that spans the five-week period following the tragic Stoneman Douglas School shooting. In the synthetic scenario, we (1) produce polarized user networks, (2) quantify the polarization, and (3) measure the impact of random node removals on polarization. We find no evidence that random removals significantly affect the polarization measurements on synthetically generated networks. This finding motivates us to employ a similar removal experiment on a real-world Twitter dataset. Using the Twitter dataset, we build retweet networks at the hashtag and aggregate levels. We show that removing automated accounts from retweet networks significantly reduces the polarization while random removals do not.



The polarization effect in the retweet network is, without any doubt, driven by the appeal of the automatically generated content. If automated activity were not getting any engagement from other users, we would not observe any significant change in the polarization of the retweet network. To investigate the textual and user-profile correlates of the engagement levels that automated accounts attract, we develop a zero-inflated negative binomial regression model on retweet counts. We report four factors predicting increased diffusion rates for bot content: use of memes/videos (by an IRR factor of 59%), use of LIWC "they" category keywords (by 18%), moral-emotional words (by 11%), and bot accounts' follower count (by 0.01%).
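As a minimal illustration of how such incidence rate ratios (IRRs) are read: in a count model, a coefficient β translates to a percent change of (e^β − 1) × 100 in the expected retweet rate per one-unit increase of the predictor. The coefficients below are hypothetical values chosen only to reproduce the reported magnitudes; they are not the fitted model.

```python
import math

def irr_percent_change(beta):
    """Percent change in the incidence rate implied by a count-model
    coefficient beta, for a one-unit increase in the predictor."""
    return (math.exp(beta) - 1.0) * 100.0

# Hypothetical coefficients corresponding to the reported IRR changes.
beta_meme = math.log(1.59)   # memes/videos      -> +59%
beta_they = math.log(1.18)   # LIWC "they" words -> +18%
beta_moral = math.log(1.11)  # moral-emotional   -> +11%

print(round(irr_percent_change(beta_meme)))  # 59
```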

We further investigate the effects of self-disclosure as an automated account. We find no evidence that self-disclosing automated accounts contribute to polarization. Furthermore, we also observe a 10-fold decrease in engagement levels of human users with self-disclosing automated accounts. These self-identifications are made apparent to human users in either the profile name or the screen_name attribute. The pairwise engagement ratio of human-controlled accounts with self-identifying automated accounts (0.1154) is overtly lower than the engagement with undisclosed automated accounts (1.3001).

We structure the rest of the paper as follows. In the next section, we provide a brief background on related literature. In Section 3, we introduce the methodologies we adopt and extend from previous work on network science. In Section 4, we present the results of the synthetic network and Twitter experiments. In Section 5, we summarize our major findings and provide future research directions.

2 RELATED WORK
Our work is inspired by previous research on online networks, political polarization, and the prevalence and impact of automated activity on social media platforms. In this section, we briefly discuss our connection points to previous literature on these three subjects.

In The Logic of Connective Action [5], Segerberg et al. suggest explaining action networks in three broad categories: self-organizing connective action networks, organizationally enabled connective action networks, and organizationally brokered collective action networks. They characterize these three network types on a spectrum of organizational coordination, one extreme being little to no organizational coordination and the other being strong to full organizational coordination. Recent history has seen an upsurge of the first two types, partially thanks to the advancement of communication technologies. As discussed by numerous scholars previously [13, 15, 25], the Zapatista movement of the early '90s epitomizes the connective action phenomenon. In [25], Garrido et al. characterize EZLN's online network presence (http://www.ezln.org/) and how it shaped the international support network of the group.

Since then, researchers have conducted multitudes of observational studies around connective action networks. Agarwal et al. [3] analyze the role of Twitter in the Occupy protests by suggesting a theoretical framework and analyzing Twitter streams during the protests. Several other studies present evidence for the positive role and pitfalls of social media use in community building [14, 44, 45]. Tufekci points out the fragile and ephemeral nature of these social-media-fueled connective action networks in multiple anecdotes [39]. She marks the impetus role of social media in large crowds getting together without as much effort as it would take with more traditional grassroots campaigning. Regardless of its role in connective action, new information technologies are proven, through natural experiments, to have a measurable impact on voter turnout [30], the political choice of undecided voters [20], and money donations [7].

With the advent of social media and its relevance in political communication, a phenomenon called the automated account (a.k.a. bot account) came under the spotlight. Recently, numerous studies have disclosed their existence and impact on social media. Varol et al. [42] characterize the detection of these types of accounts and their interactions with human-controlled accounts on Twitter. Stella et al. [36] study automated accounts' behavior in the 2017 Catalan independence referendum. They show that automated accounts deliberately target central hubs with inflammatory content to gain traction with the general public. Ferrara et al. [22] disclose a flock of social bots in a misinformation campaign during the 2017 presidential election season in France. Shao et al. [35] present evidence of higher activity by automated accounts in spreading low-credibility news sources on Twitter. Very recently, Lou et al. [29] develop a model of information spreading with agents having limited attention, showing how automated activity can easily overshadow deliberate democratic exchange of information on these social media platforms.

Retweeting is a widely adopted action form among Twitter users for encouraging political participation such as donation or protest [9]. Political polarization has been found to be an imminent component of this behavior. Early studies by Adamic et al. [2] explore the divided nature of Republican and Democrat blogs in the blogosphere. Along the same vein, Conover et al. [17] identify the polarized nature of Twitter retweet networks among Democrats and Republicans in the U.S. Weber et al. [46] identify similarly polarized behavior among secularist and Islamist Twitter users in Egypt. To quantify the level of polarization, Garimella et al. [23] suggest a random-walk-based polarization metric for political hashtags in Twitter retweet networks. In this work, we also utilize Garimella et al.'s [23] approach with a slight modification.

We study these three prevalent phenomena, namely online networks (connective action), political polarization, and automated activity, jointly on Twitter in the unfolding of the Parkland school shooting event. We aim to measure the impact of automated activity on the polarization of the endorsement (retweet without edit) networks.

3 METHODOLOGY
Our methodology consists of several essential pieces that ensure the robustness of our measurements of the polarization impact of automated accounts. Although our main experiments are conducted on Twitter data, (1) we extend a directed scale-free graph model to generate polarized networks and test the stability of the polarization metric we utilize on the generated synthetic graphs. Once the stability of the polarization metric is established, (2) we crawl a month of Twitter data associated with the Parkland school shooting incident to construct our main dataset. (3) We assign political labels to Twitter users with the help of a supervised classification task, then (4) identify the automated accounts using a third-party state-of-the-art bot detection tool. From this labeled network, (5) we measure the polarization of the whole network, the polarization of the network with random accounts removed, and the polarization of the network with bot accounts removed. A comparative analysis of the polarization of these three networks quantifies the effect of the automated accounts on the ecosystem.

Figure 1: Our methodology in measuring the effect of automated accounts. First, we validate our experimental setup with synthetically generated directed polarized scale-free networks. Second, we build the retweet network of the Twitter dataset and compare the polarization of the complete retweet network with a network missing randomly selected nodes. Third, we compare the polarization of the complete retweet network with the retweet network from which automated accounts are removed.

To provide more granular context and understanding, we conduct content analysis and focus on observational impact differences of varying types of automated accounts (e.g., self-identifying automated accounts).

3.1 Generating Synthetic Polarized Networks
We initialize a synthetic network with two separate Erdős–Rényi random network models. These initial two sub-networks correspond to the initial stages of two political sides. Then, we adopt a directed scale-free graph model [8] and modify it to generate polarized networks. Details of the algorithm can be seen in Algorithm 1. Notice that our contribution to [8] is the polarization parameter ρ. When the model connects two nodes (new or old), it depreciates the effect of the indegrees and outdegrees of nodes on the opposite side of the political spectrum by ρ. Thus, nodes show political homophily in their connections in addition to their preferential attachment to higher-degree nodes. With the help of the polarization parameter ρ, we are able to generate scale-free directed networks with different levels of polarization. Later, we use this model to test our hypothesis on synthetically generated polarized networks.

Algorithm 1 Synthetic Polarized Network Generation

1: Input: α, β, γ, δin, δout, N, ρ, where α + β + γ = 1 and 0 ≤ ρ ≤ 1
2: Output: G
3: GL, GR ← Erdős–Rényi()
4: G ← GL ∪ GR
5: while |G| < N do
6:   with probability α:
7:     Draw a side from {L, R} for a new node v.
8:     Add the node v and an edge from v to an existing node w, where w is chosen according to din + δin for w's on the same side as v, and according to ρ · (din + δin) otherwise.
9:   with probability β:
10:    Add an edge from an existing node v to an existing node w, where v and w are chosen independently, v according to dout + δout and w according to din + δin if v and w are on the same side, and according to ρ · (din + δin) otherwise.
11:  with probability γ:
12:    Draw a side from {L, R} for a new node v.
13:    Add the node v and an edge from an existing node w to v, where w is chosen according to dout + δout for w's on the same side as v, and according to ρ · (dout + δout) otherwise.
14: end while
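As an illustration, the growth loop of Algorithm 1 can be sketched in Python. This is a minimal, unoptimized sketch under assumed parameters (tiny 5-node seed groups stand in for the Erdős–Rényi initialization; the α, β, γ, δin, δout defaults are illustrative), not the authors' implementation.

```python
import random

def polarized_scale_free(n_target, alpha=0.41, beta=0.54, gamma=0.05,
                         d_in=0.2, d_out=0.5, rho=0.05, seed=None):
    """Grow a directed network in which attachment to nodes on the
    opposite political side is discounted by rho (small rho = strong
    polarization; rho = 1 recovers the unpolarized model of [8])."""
    rng = random.Random(seed)
    side, in_deg, out_deg, edges = {}, {}, {}, []

    # Seed each side with a small dense sub-network (stand-in for the
    # Erdős–Rényi initialization in line 3 of Algorithm 1).
    nxt = 0
    for s in ("L", "R"):
        members = []
        for _ in range(5):
            side[nxt], in_deg[nxt], out_deg[nxt] = s, 0, 0
            members.append(nxt)
            nxt += 1
        for u in members:
            for v in members:
                if u != v and rng.random() < 0.33:
                    edges.append((u, v))
                    out_deg[u] += 1
                    in_deg[v] += 1

    def pick(weight):
        nodes = list(side)
        return rng.choices(nodes, weights=[weight(u) for u in nodes])[0]

    while len(side) < n_target:
        r = rng.random()
        if r < alpha:            # new node v with an edge v -> existing w
            s = rng.choice(("L", "R"))
            w = pick(lambda u: (in_deg[u] + d_in) * (1 if side[u] == s else rho))
            v, nxt = nxt, nxt + 1
            side[v], in_deg[v], out_deg[v] = s, 0, 1
            edges.append((v, w))
            in_deg[w] += 1
        elif r < alpha + beta:   # edge between two existing nodes
            v = pick(lambda u: out_deg[u] + d_out)
            w = pick(lambda u: (in_deg[u] + d_in)
                     * (1 if side[u] == side[v] else rho))
            edges.append((v, w))
            out_deg[v] += 1
            in_deg[w] += 1
        else:                    # new node v with an edge existing w -> v
            s = rng.choice(("L", "R"))
            w = pick(lambda u: (out_deg[u] + d_out) * (1 if side[u] == s else rho))
            v, nxt = nxt, nxt + 1
            side[v], in_deg[v], out_deg[v] = s, 1, 0
            edges.append((w, v))
            out_deg[w] += 1
    return side, edges
```

Sweeping ρ over a grid, as in Section 4.1, then amounts to calling this generator at each (ρ, N) grid point.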



3.2 Quantifying Polarization
Measuring the impact of bot accounts on network polarization requires us to quantify the polarization of a given network precisely. To this end, we refer to a recent study [23] and adopt its random walk controversy (RWC) score:

RWC = P_{LL+} · P_{RR+} − P_{LR+} · P_{RL+}    (1)

where P_{LL+} is the probability of a random walk starting from any left node (L) ending up at a central left node (L+). Similarly, P_{RR+} is the probability of starting at any right node and ending at a central right node. P_{LR+} and P_{RL+} follow the same definition and quantify the probability of a walk crossing sides. To compute the aforementioned probabilities, Garimella et al. [23] suggest a simple Monte Carlo sampling of random walks over the network. Given the samples of walks, they quantify the probabilities P_{LL+}, P_{RR+}, P_{LR+}, P_{RL+} as follows:

P_{LL+} = C_{LL+} / (C_{LL+} + C_{LR+}),    P_{RR+} = C_{RR+} / (C_{RR+} + C_{RL+}),
P_{LR+} = C_{LR+} / (C_{LR+} + C_{LL+}),    P_{RL+} = C_{RL+} / (C_{RL+} + C_{RR+}),

where C stands for the count of walks falling into each of the previously defined types. The RWC polarization metric returns values between +1 (perfect polarization) and −1 (no polarization).
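The sampling estimator above can be sketched as follows. This is a simplified, self-contained sketch (the set of "central" nodes per side is taken as given, and walks simply follow outgoing edges until they first hit a central node), not the exact procedure of [23]:

```python
import random
from collections import defaultdict

def rwc_score(edges, side, central, n_walks=2000, max_steps=50, seed=0):
    """Monte Carlo estimate of the random walk controversy (RWC) score.

    edges   : list of directed (u, v) pairs; walks follow edge direction
    side    : dict mapping node -> 'L' or 'R'
    central : set of designated high-degree "central" nodes
    A walk ends when it first reaches a central node other than its
    start; walks that never reach one within max_steps are discarded.
    """
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    counts = defaultdict(int)   # counts[('L', 'R')] plays the role of C_{LR+}
    nodes = list(side)
    for _ in range(n_walks):
        start = u = rng.choice(nodes)
        for _ in range(max_steps):
            if u in central and u != start:
                counts[(side[start], side[u])] += 1
                break
            if not adj[u]:
                break
            u = rng.choice(adj[u])

    def p(a, b):
        # e.g. p('L','L') = C_{LL+} / (C_{LL+} + C_{LR+})
        denom = counts[(a, "L")] + counts[(a, "R")]
        return counts[(a, b)] / denom if denom else 0.0

    return p("L", "L") * p("R", "R") - p("L", "R") * p("R", "L")
```

On a toy network with two disconnected cliques and one central node per side, every counted walk stays on its own side, so the estimate reaches the metric's maximum of +1.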

3.3 User Classification
To determine the left and right sides for the RWC algorithm, we develop a political classification task. To set up the classification task, we first acquire third-party intelligence from a crowd-sourcing platform that indicates the political leanings of news domains. We crawl news domains' political scale (left, center-left, center-right, right) from mediabiasfactcheck.org. This procedure equips us with 1,241 news domains and their political labels.1

Various studies have shown that social media users' political news diet is highly clustered according to their political leaning [1]. We adopt a similar heuristic and label social media users based on the domains of the news articles they share. We execute a simple majority voting for each user based on what they share in their social media posts. We use −2 for left domains, −1 for center-left domains, +1 for center-right domains, and +2 for right domains. We keep users having cumulative values greater than +2 or less than −2 as our training dataset. This provides us with around 80K social media accounts and their 7M tweets labeled as left or right.
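The heuristic can be sketched as follows; the domain table here is hypothetical (the real table holds the 1,241 crawled domains):

```python
# Hypothetical domain -> score table; the paper's full table comes
# from mediabiasfactcheck.org.
DOMAIN_SCORE = {"leftnews.example": -2, "centerleft.example": -1,
                "centerright.example": +1, "rightnews.example": +2}

def label_user(shared_domains, threshold=2):
    """Sum per-domain scores over a user's shared links; keep the user
    only if the cumulative score lies strictly beyond +/- threshold."""
    total = sum(DOMAIN_SCORE.get(d, 0) for d in shared_domains)
    if total > threshold:
        return "right"
    if total < -threshold:
        return "left"
    return None  # not confident enough; excluded from the training set
```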

After garnering labeled social media accounts, we develop two separate classification tasks for classifying the rest of the users. Note that these users have not shared enough news articles for us to assess their political ideology. First, we use a label propagation algorithm on the retweet network, informed by [18]. Second, we develop several text-based classification tasks and report each classifier's accuracy with five-fold cross-validation in Table 1.
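A minimal sketch of such a propagation step, assuming seed labels from the news-domain heuristic stay fixed and unlabeled users adopt the majority label of their already-labeled retweet neighbors (the specifics of [18] differ):

```python
from collections import defaultdict

def propagate_labels(edges, seeds, n_iter=10):
    """Simple label propagation sketch over an undirected retweet graph.

    edges : list of (u, v) retweet pairs, treated as undirected
    seeds : dict node -> 'left'/'right'; seed labels stay fixed
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    labels = dict(seeds)
    for _ in range(n_iter):
        updated = {}
        for node in adj:
            if node in seeds:
                continue  # seeds are never overwritten
            votes = defaultdict(int)
            for nb in adj[node]:
                if nb in labels:
                    votes[labels[nb]] += 1
            if votes:
                updated[node] = max(votes, key=votes.get)
        labels.update(updated)
    return labels
```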

Given the superior performance of the label propagation algorithm, we build our analysis upon its results for the rest of the paper. We also make the analysis built upon the best-performing text-based classifier available for interested readers in Appendix A.

1https://github.com/botpolarization/botpolarization/raw/master/media_scales.zip

Type      Method               F1-Macro   Accuracy
Text      Random Forest        0.4438     0.7964
Text      GBM                  0.6433     0.8403
Text      Logistic Regression  0.9101     0.9441
Network   Label Propagation    0.9552     0.9715

Table 1: Bag-of-words-based and network-based classification performances.

3.4 Automated Account Detection
To detect the automated accounts in our dataset, we register for the Botometer API provided by Indiana University [19].2 We query a random sample of 260K accounts from our dataset. We tag accounts that have a score over 0.5 as automated and the rest as not automated. The API returns 25K accounts flagged as automated (10%), which agrees with the previous literature's findings on the prevalence of automated accounts on social media [42].
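The thresholding step amounts to a simple partition of per-account scores; the function and its inputs below are illustrative, not the Botometer client API:

```python
def flag_automated(scores, threshold=0.5):
    """Partition accounts by a bot-likelihood score in [0, 1] (e.g. one
    score per queried account); `scores` maps account id -> score.
    Returns (automated, rest, prevalence)."""
    automated = {uid for uid, s in scores.items() if s > threshold}
    rest = set(scores) - automated
    prevalence = len(automated) / len(scores) if scores else 0.0
    return automated, rest, prevalence
```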

3.5 Measuring the Impact
To measure the impact of automated accounts on network polarization, we set up an experiment on real and synthetic networks as follows. First, we compute the polarization of the complete retweet network: we run the RWC algorithm 1,000 times and report the distribution of polarization scores. Second, we compute the polarization of the sub-network without any automated accounts, again running the RWC algorithm 1,000 times and reporting the distribution. Finally, we compute the polarization of the sub-network acquired by randomly removing a number of nodes equal to the number of automated accounts, once more running the RWC algorithm 1,000 times and reporting the distribution. Then, we compare these three distributions pairwise and report the significance results of the Kolmogorov-Smirnov 2-sample test. In our application, the Kolmogorov-Smirnov test [32] assesses whether two measured polarization score distributions are drawn from underlying polarization distributions with different means and variances.
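The pairwise comparison can be sketched with a pure-Python two-sample KS test using the standard asymptotic p-value approximation (mirroring, e.g., scipy.stats.ks_2samp); this is a sketch, not the paper's testing code:

```python
import math

def ks_2sample(x, y):
    """Two-sample Kolmogorov-Smirnov test: returns (D, approximate p).

    D is the maximum gap between the two empirical CDFs; the p-value
    uses the asymptotic Kolmogorov distribution with the usual
    small-sample correction factor."""
    x, y = sorted(x), sorted(y)
    n, m = len(x), len(y)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        if x[i] <= y[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    en = math.sqrt(n * m / (n + m))
    t = (en + 0.12 + 0.11 / en) * d
    p = 2 * sum((-1) ** (k - 1) * math.exp(-2 * k * k * t * t)
                for k in range(1, 101))
    return d, max(min(p, 1.0), 0.0)
```

Fed with the 1,000 RWC scores of the complete network and the 1,000 scores of a reduced network, a small p-value indicates the two polarization distributions differ.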

4 EXPERIMENTAL RESULTS
We branch our analysis into two distinct sets of experiments. First, we set up experiments with artificially generated polarized networks. Second, we set up experiments on our focal study: the Twitter dataset regarding the unfolding and aftermath of the Stoneman Douglas High School shooting event.

4.1 Validating the Experimental Setup
In this set of experiments, we generate synthetic polarized networks emulating the retweet network of Twitter. First, we report the interplay of the random walk controversy (RWC) score with the polarization parameter ρ of the previously introduced variant of the directed scale-free network model. We also check whether the RWC score is robust to changes in network size. Our search space spans polarization parameter ρ values between 0.01 and 0.1 with increments of 0.01, and numbers of nodes between 10,000 and 200,000 with increments of 10,000. We generate the synthetic networks based on the given ρ and number of nodes, starting from Erdős–Rényi random networks with 100 nodes on each side and 0.33 edge probability.

2https://botometer.iuni.iu.edu

Figure 2: Polarization score measurements under varying synthetic network generation parameters. Notice the invariance of the polarization metric to the size of the network.

We also experiment with parameters beyond what we report here, but for the sake of brevity, we only report results that span the neighborhood of the mean polarization score of our Twitter dataset (µRWC = 0.9067).

4.1.1 RWC Score on Synthetic Polarized Networks. By tweaking the polarization parameter ρ and the number-of-nodes parameter of the synthetic polarized network generation model, we generate 200 networks of various sizes and polarization levels. We compute the polarization score distribution for each by running the RWC algorithm 1,000 times. We observe two main patterns in our experiments. (1) The suggested polarization scoring algorithm (RWC) is in strong linear correlation with ρ (MSE = 3.7702e−5, R² = 0.9889, pval < 0.001). (2) Changes in the number of nodes do not provide strong evidence of affecting the polarization score of the underlying configuration (MSE = 0.0034, R² = 8.0824e−6, pval = 0.2105). The results can be further investigated visually in Figure 2.

4.1.2 RWC Score after Random Node Removals. Assessing the impact of node removal on network polarization is crucial in our study. In this subsection, we evaluate the impact of node removals on the polarization of various synthetically generated networks. To emulate the observational data, we experiment with removing 0.5% and 1% of the nodes from networks. Note that the impact we measure on our observational Twitter dataset is already incurred by less than 0.1% of the accounts of the complete retweet network. Our search space spans 400 experiment configurations derived from 20 network sizes, 10 polarization levels, and 2 removal rates.

Figure 3 shows the results of our experiments: networks at various levels of polarization do not experience a significant polarization change when nodes are removed randomly. Out of 400 removal experiments, 388 fail to reject the null hypothesis that the polarization (RWC score) stays unchanged when nodes are removed randomly (Kolmogorov-Smirnov 2-sample test, pval > 0.005). Only seven of the experiments present decreased polarization while five of them exhibit increased polarization. Among the 400 random node removal experiments, we report the average polarization change as 0.0006 and its standard deviation as 0.0002. Our results can be accessed and replicated through the publicly available scripts at https://github.com/botpolarization/botpolarization/tree/master/code/synthetic_network_generation.

Figure 3: Polarization (RWC) of the complete network versus networks with 0.5% and 1.0% of nodes removed, for ρ between 0.01 and 0.1. Notice the insignificance of the removals in overall polarization at various polarization (ρ) levels.

4.2 Measuring the Impact of Automated Accounts

Figure 4: Retweet network during and aftermath of the Parkland shooting. Light blue is automated activity. Dark blue is the activity of users classified as left-leaning and red as right-leaning. Layout is computed by Gephi's ForceAtlas2 algorithm.

4.2.1 Twitter Dataset & Preprocessing. Our dataset includes 3.7M users and their 25M tweets posted between February 1, 2018, and March 6, 2018. We obtained the dataset from GNIP Twitter by requesting tweets that contain any of the 140 words, subwords, and bigrams listed at https://github.com/botpolarization/botpolarization/raw/master/GNIP_query_list.txt. We build the retweet-without-edit network by compiling a network of 3.3M nodes and 16M edges. We make the tids and uids of the corresponding retweet network publicly available 3.

Figure 5: (a) depicts the difference in polarization between the complete retweet network (red) and the network with automated accounts removed (grey). (b) depicts the difference in polarization when the same number of nodes is removed randomly. (c) shows retweet transitions between types of accounts.

We detect 25K automated accounts through the Botometer API [19]. We note that this is not a comprehensive list of the automated accounts in our dataset, as we are constrained by the Twitter API: due to these resource limitations, we were able to query only 260K accounts. The effects we measure therefore represent only a portion of the automated activity in our dataset.

We use a label propagation approach to classify users' political leanings, as discussed in the classification section under methodology, and identify 3M left-leaning and 300K right-leaning Twitter users.

4.2.2 Overall Network Polarization Change. The major finding of this study is as follows: when automated accounts are removed from the retweet network of Twitter activity relating to the unfolding and aftermath of the Parkland shooting event, polarization between left-leaning and right-leaning accounts decreases. When the same number of accounts is removed randomly from the network, the overall polarization score is not affected significantly. More precisely, the difference between the polarization measurements of the complete retweet network and the network from which automated accounts are removed is 0.0025. The same analysis yields a 0.0007 polarization difference when done with random removals; in other words, approximately 3.5 times less difference. The finding can be observed in Figures 5a and 5b. For possible explanations of this phenomenon, we investigate our observational data further in the following sections.

3 https://github.com/botpolarization/botpolarization

Figure 5c presents the overall retweeting interaction between automated and not-automated accounts. 160K retweets are initiated by 23K not-automated accounts towards 1.5K automated accounts, while 123K retweets are initiated by only 7K automated accounts towards 5K not-automated accounts. This signals hyper-active automated accounts promoting not-automated accounts' tweets through retweeting. On the other hand, the volume of retweets acquired by automated accounts from not-automated accounts is greater than the other way around (160K > 123K). Indeed, if the automated activity were not getting any traction, it would not affect the RWC score, and the impact would not be at measurable levels.

4.2.3 Hashtag-Level Network Polarization Change. Hashtags are popular semantic atomic units that serve as topical hubs on Twitter. In this section, we extend our analysis to a finer granularity level and report the hashtag-level polarization impact of automated activity. First, we build retweet networks for the 100 hashtags with the most participation from both political leanings. To quantify the participation from the left and right sides, we use the harmonic mean of the counts of users from both political sides, 2|L||R| / (|L| + |R|). These retweet networks of the 100 most participated hashtags span 80% of the total retweet activity containing at least one hashtag, and 26% of the complete dataset.
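The participation score can be computed directly from the per-hashtag user counts; a minimal sketch (the counts below are illustrative, not from the dataset):

```python
def participation(n_left, n_right):
    """Harmonic-mean participation score 2|L||R| / (|L| + |R|).

    High only when a hashtag attracts many users from *both*
    political sides; dominated by the smaller side's count.
    """
    if n_left + n_right == 0:
        return 0.0
    return 2 * n_left * n_right / (n_left + n_right)

# Hypothetical counts: a balanced hashtag outranks a one-sided hashtag
# with the same total number of participants.
balanced = participation(5_000, 5_000)   # -> 5000.0
one_sided = participation(9_900, 100)    # -> 198.0
```

A one-sided 9,900/100 split scores far below a balanced 5,000/5,000 split, so ranking hashtags by this score favors genuinely contested topics.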

Second, we measure the RWC score distribution of each hashtag network. Similar to the previous analysis, we remove automated accounts from the network and measure the RWC score distribution again. We analyze the retweet networks of the 100 most participated hashtags and find that the majority of them experience reduced polarization when automated accounts are removed from the network. In particular, 65 of the hashtags present a decrease in polarization when automated accounts are removed. Seven of them do not experience any statistically significant change, and 28 experience an increase in polarization. This finding also implies that automated activity, most of the time, has a significant role in creating more polarized endorsement networks at the hashtag level.

Even though the majority of the hashtags (65%) experience a decrease in polarization when automated accounts are removed, we still observe heterogeneity in their impact (the other 28%). This opens up a future direction for us and other researchers to study whether there is any correlation between the increase/decrease type of impact and the properties of the hashtag (e.g., semantics, political leaning, or emotion). Figure 6 presents a brief summary of our findings (the change for the 10 most participated hashtags) as a ridge plot. We also report a similar change at the hashtag level with text-based political leaning classification of users in Appendix A.

4.2.4 Content Analysis. So far, we have presented the network polarization impact of automated accounts. This impact is measurable mainly due to the retweets that automated accounts are able to fetch. In this section, we focus on the predictors of diffusion (retweet counts) of automated accounts' tweets. Predictors of diffusion in social media are a widely studied topic for non-automated scenarios. To build on the existing literature, our interest is to investigate the predictors of diffusion for automated accounts in our dataset using the previously identified factors in related works.


Figure 6: The effect of automated accounts on the hashtags that attracted the highest participation from two sides. The red distribution represents the polarization of the complete retweet network, and the gray distribution represents the network without automated accounts.

Next, we list the previously reported predictive factors [6, 10, 37] of diffusion and investigate their effectiveness when employed by automated accounts. In our dataset, we have 102,393 tweets posted by automated accounts. We design a negative binomial regression task with zero inflation to address the over-dispersion in our dataset. Our regression's target variable is retweet count, and our predictor variables are:

• meme/video quantifies how many memes or videos are embedded in the tweet,
• mention_count quantifies how many user handles are in the tweet,
• followers_count quantifies how many followers the automated account has,
• 'us' quantifies how many times a tweet contains the word we or variants defined by the LIWC dictionary,
• 'them' quantifies how many times a tweet contains the word they or variants defined by the LIWC dictionary,
• moral-emotional quantifies how many times a moral-emotional word appears in the tweet. The word list is comprised of the intersection of the moral words and emotional words dictionaries [10],
• emotional-only quantifies how many times an emotional word appears in the tweet. The word list is comprised of the distinctive emotional words that are not in the moral words dictionary at the same time,
• moral-only quantifies how many times a moral word appears in the tweet. The word list is comprised of the distinctive moral words that are not in the emotional words dictionary at the same time.
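To see why a zero-inflated negative binomial model suits this setting, a pure-Python simulation may help: retweet counts mix many structural zeros with an over-dispersed count distribution (variance well above the mean), which a plain Poisson regression cannot capture. All parameter values below are hypothetical; the paper fits the actual model with JMP Pro.

```python
import math
import random

def zinb_sample(rng, pi_zero=0.6, nb_mean=3.0, shape=0.5):
    """One draw from a zero-inflated negative binomial (ZINB).

    With probability pi_zero emit a structural zero; otherwise draw a
    negative binomial count realized as a gamma-Poisson mixture.
    All parameter values here are hypothetical illustrations.
    """
    if rng.random() < pi_zero:
        return 0
    lam = rng.gammavariate(shape, nb_mean / shape)  # gamma-distributed rate
    # Poisson(lam) via Knuth's multiplication method (fine for small lam)
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

rng = random.Random(42)
counts = [zinb_sample(rng) for _ in range(20_000)]
sample_mean = sum(counts) / len(counts)
sample_var = sum((c - sample_mean) ** 2 for c in counts) / len(counts)
# Over-dispersion: the variance is several times the mean, and zeros
# dominate -- exactly the shape a plain Poisson model cannot fit.
```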

For our implementation, we use JMP Pro's Generalized Regression tool with a zero-inflated negative binomial regression task. Our regression analysis yields meme/video as the most prominent predictor. It has the highest estimate (0.4675 ± 0.0187) among our predictors, aligning with previous studies which also report the importance of visual media in the virality of tweets [37]. The second most prominent predictor is 'them'. Use of the word and its variants in the LIWC dictionary increases the expected retweet count by 18.33%. Following previous literature on persuasive political communication [26, 27], success of blaming the other is a prominent phenomenon in the age of political populism coupled with social media. Furthermore, in alignment with previous works [10, 41], we find that moral-emotional words contribute more (0.1048 ± 0.0170) to retweet count than emotional-only words (0.0734 ± 0.0083).

Predictor            IRR     Lower 95%  Upper 95%
meme/video***        1.5963  1.5388     1.6558
them***              1.1833  1.1431     1.2249
moral-emotional***   1.1105  1.0741     1.1480
emotional-only***    1.0762  1.0589     1.0938
followers_count***   1.0001  1.0001     1.0001
us                   1.0030  0.9726     1.0339
mention_count***     0.9831  0.9777     0.9885
moral-only***        0.9512  0.9313     0.9715
url_count***         0.5522  0.5350     0.5699
***p < 0.0001

Table 2: Incidence Rate Ratios (IRR) derived from zero-inflated negative binomial regression. Positive and negative predictors of retweet count are reported with their confidence intervals.

Figure 7: (a) Change in polarization when self-identifying automated accounts are removed from the retweet network; notice the insignificance of the change (pval > 0.05). The red distribution represents the polarization of the complete retweet network and the grey the network with the self-identifying accounts removed. (b) Retweeting transitions between not-automated and self-identifying automated accounts.

We report that moral-only words have a small negative predictor coefficient (−0.0501 ± 0.0108), alongside mention_count (−0.0171 ± 0.0028). However, a change of one unit in URL count decreases the chance of a tweet being retweeted almost by half. For more detailed information about the incidence rate ratios of the independent variables, readers can refer to Table 2. In light of these observations, we argue that the impact automated activity incurs is mostly in alignment with previous findings on the characteristics of engaging social media content.
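The incidence rate ratios in Table 2 are the exponentiated regression coefficients quoted in the text. A quick sanity check (coefficients taken from the text; small discrepancies with the table are rounding):

```python
import math

# Coefficients quoted in the text -> IRRs reported in Table 2.
coefs = {
    "meme/video":      0.4675,   # IRR ~ 1.596 (+59% retweets per unit)
    "moral-emotional": 0.1048,   # IRR ~ 1.110
    "emotional-only":  0.0734,   # IRR ~ 1.076
    "moral-only":     -0.0501,   # IRR ~ 0.951
    "mention_count":  -0.0171,   # IRR ~ 0.983
}

# An IRR is exp(coef): the multiplicative change in expected retweet
# count for a one-unit increase in the predictor.
irrs = {name: math.exp(b) for name, b in coefs.items()}
```

Conversely, url_count's IRR of 0.5522 corresponds to a coefficient of ln 0.5522 ≈ −0.59, i.e. each additional URL roughly halves the expected retweet count.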


4.2.5 When Automation is Self-Identified. In the previous experiments, we demonstrated the significant effects of automated accounts on polarization. We hypothesize that a simple, explicit indicator that an account is automated may be an effective way to prevent the unintended polarization impact of automated accounts. To test our hypothesis on our observational data, we distinguish the automated accounts that explicitly use the words "bot", "robot", or "chatbot" in their screen names or profile names (publicly visible account attributes) from the rest, using the following regular expressions:

• ∗_bot, ∗ bot, bot_∗, bot ∗, [ ]+Bot
• ∗_robot, ∗ robot, robot_∗, robot ∗, [ ]+Robot
• ∗_chatbot, ∗ chatbot, chatbot_∗, chatbot ∗, [ ]+Chatbot
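The wildcard patterns above can be approximated with a single regular expression. The following sketch is our own approximation (assumed case-insensitive), not the authors' exact implementation:

```python
import re

# "bot", "robot", or "chatbot" as a standalone token delimited by
# start/end of the name, whitespace, or an underscore -- approximating
# the wildcard patterns listed above.
SELF_IDENTIFYING = re.compile(
    r"(?:^|[\s_])(?:chatbot|robot|bot)(?:[\s_]|$)", re.IGNORECASE
)

def is_self_identifying(name):
    """True if a screen or profile name self-discloses automation."""
    return SELF_IDENTIFYING.search(name) is not None
```

Requiring a delimiter on both sides keeps names like "botany" or "abbot" from matching, while "news_bot" or "Chatbot Helper" do.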

We identify 1,802 self-identifying automated accounts matching these regular expressions. Figure 7b shows the retweet interactions between human-controlled accounts and self-identifying automated accounts. While human-controlled accounts retweeted self-identifying automated content 1.5K times, the opposite transition happened 13K times, indicating only 12% reciprocal engagement from humans to self-identifying bots. This discrepancy is notable especially when compared to all human-controlled and automated account interactions in Figure 5, where the relative engagement from human-controlled accounts is 130% (160K/123K). Thus, self-identification clearly changes the dynamics of human-automated account interactions.

Following this observation, we repeat the node-removal experiment described in the previous section, this time with the 1,802 self-identifying automated accounts instead of all automated accounts. As can be observed in Figure 7a, removal of self-identifying automated accounts from the network does not result in a statistically significant change in polarization. We hope that this finding further motivates ongoing automated-activity detection research and suggests a way to eliminate automated accounts' unintended impact on social media platforms.

5 CONCLUSION

In this work, we provide anecdotal evidence that automated activity in a political subset of Twitter contributes to more polarized diffusion networks during the unfolding and aftermath of a mass shooting event. We find that the predictors of engagement with automated activity generally overlap with previous findings on the general characteristics of engaging content on social media. Another essential conclusion is that when the automated nature of a social media account is self-identified or revealed, this polarization effect is no longer present. Based on our observational dataset, we hypothesize that human-controlled accounts refrain from engaging with automated accounts when they are aware of the other side's automated nature. As future work, we plan to conduct a similar impact analysis with datasets collected from other countries, languages, and event characteristics, and report whether the polarization impact persists.

A RESULTS WITH TEXT-BASED POLITICAL LEANING CLASSIFICATION

In this short appendix, we present our findings on the impact of automated accounts when political leaning classification is executed with text-only features. We use the labels acquired through the best performing text-based classifier in Table 1: logistic regression. In the complete retweet network, automated activity has 44 times more impact on polarization than the random effect (Figure 8). Notice that this polarization impact is much higher than what we report in the main text (three times). Furthermore, we find that 84% of the most popular debate-related hashtags experience an increase in polarization with automated activity (Figure 9). Overall, the measured polarization impact of automated accounts among left- and right-leaning accounts is robustly evident, confirmed by two fundamentally different political leaning classification approaches. We also note that, when we repeat our analysis on self-disclosing automated accounts with text-based labels, we again find no evidence that they contribute to polarization (pval: 0.2575, ∆polar: 0.0005).

(a) Automated accounts removed (b) Accounts removed randomly

Figure 8: Polarization impact when the political leaning of an account is classified using a text-based classifier.

Figure 9: The effect of automated accounts on the hashtags that attracted the highest participation from both political leanings, when political leanings of accounts are classified through a text-based classifier.


REFERENCES

[1] 2014. Political Polarization and Media Habits. Pew Research Center (2014). https://www.pewresearch.org/wp-content/uploads/sites/8/2014/10/Political-Polarization-and-Media-Habits-FINAL-REPORT-7-27-15.pdf

[2] Lada A. Adamic and Natalie Glance. 2005. The Political Blogosphere and the 2004 U.S. Election: Divided They Blog. In Proceedings of the 3rd International Workshop on Link Discovery (LinkKDD '05). ACM, New York, NY, USA, 36–43. DOI:http://dx.doi.org/10.1145/1134271.1134277

[3] Sheetal Agarwal, W. Lance Bennett, Courtney Johnson, and Shawn Walker. 2014. A Model of Crowd Enabled Organization: Theory and Methods for Understanding the Role of Twitter in the Occupy Protests. International Journal of Communication 8, 0 (2014). https://ijoc.org/index.php/ijoc/article/view/2068

[4] Pablo Barberá, John Jost, Jonathan Nagler, Joshua Tucker, and Richard Bonneau. 2015. Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber? Psychological Science 26 (08 2015). DOI:http://dx.doi.org/10.1177/0956797615594620

[5] W. Lance Bennett and Alexandra Segerberg. 2012. The Logic of Connective Action. Information, Communication & Society 15, 5 (2012), 739–768. DOI:http://dx.doi.org/10.1080/1369118X.2012.670661

[6] Sanmitra Bhattacharya, Padmini Srinivasan, and Phil Polgreen. 2014. Engagement with Health Agencies on Twitter. PLOS ONE 9, 11 (11 2014), 1–12. DOI:http://dx.doi.org/10.1371/journal.pone.0112235

[7] Bruce Bimber. 2001. Information and Political Engagement in America: The Search for Effects of Information Technology at the Individual Level. Political Research Quarterly 54, 1 (2001), 53–67. DOI:http://dx.doi.org/10.1177/106591290105400103

[8] Béla Bollobás, Christian Borgs, Jennifer Chayes, and Oliver Riordan. 2003. Directed Scale-free Graphs. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '03). Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 132–139. http://dl.acm.org/citation.cfm?id=644108.644133

[9] D. Boyd, S. Golder, and G. Lotan. 2010. Tweet, Tweet, Retweet: Conversational Aspects of Retweeting on Twitter. In 2010 43rd Hawaii International Conference on System Sciences. 1–10. DOI:http://dx.doi.org/10.1109/HICSS.2010.412

[10] William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel. 2017. Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences 114, 28 (2017), 7313–7318. DOI:http://dx.doi.org/10.1073/pnas.1618923114

[11] David A. Broniatowski, Amelia M. Jamison, SiHua Qi, Lulwah AlKulaib, Tao Chen, Adrian Benton, Sandra C. Quinn, and Mark Dredze. 2018. Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate. American Journal of Public Health 108, 10 (2018), 1378–1384. DOI:http://dx.doi.org/10.2105/AJPH.2018.304567 PMID: 30138075.

[12] Nikita Carney. 2016. All Lives Matter, but so Does Race: Black Lives Matter and the Evolving Role of Social Media. Humanity & Society 40, 2 (2016), 180–199. DOI:http://dx.doi.org/10.1177/0160597616643868

[13] Manuel Castells. 1996. Rise of the Network Society: The Information Age: Economy, Society and Culture. Blackwell Publishers, Inc., Cambridge, MA, USA.

[14] Meredith Clark. 2014. To Tweet Our Own Cause: A Mixed-Methods Analysis of the Online Phenomena Known as Black Twitter: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in the School of Media and Journalism at the University of North Carolina at Chapel Hill. Ph.D. Dissertation. University of North Carolina at Chapel Hill.

[15] Harry M. Cleaver. 1998. The Zapatista Effect: The Internet and the Rise of an Alternative Political Fabric. Journal of International Affairs 51, 2 (1998), 621–640. http://www.jstor.org/stable/24357524

[16] Elanor Colleoni, Alessandro Rozza, and Adam Arvidsson. 2014. Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data. Journal of Communication 64 (03 2014). DOI:http://dx.doi.org/10.1111/jcom.12084

[17] Michael Conover, Jacob Ratkiewicz, Matthew Francisco, Bruno Gonçalves, Alessandro Flammini, and Filippo Menczer. 2011. Political Polarization on Twitter. In Proc. 5th International AAAI Conference on Weblogs and Social Media (ICWSM). http://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2847

[18] M. D. Conover, B. Goncalves, J. Ratkiewicz, A. Flammini, and F. Menczer. 2011. Predicting the Political Alignment of Twitter Users. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing. 192–199. DOI:http://dx.doi.org/10.1109/PASSAT/SocialCom.2011.34

[19] Clayton Allen Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer. 2016. BotOrNot: A System to Evaluate Social Bots. In Proceedings of the 25th International Conference Companion on World Wide Web (WWW '16 Companion). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 273–274. DOI:http://dx.doi.org/10.1145/2872518.2889302

[20] Robert Epstein and Ronald E. Robertson. 2015. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences 112, 33 (2015), E4512–E4521. DOI:http://dx.doi.org/10.1073/pnas.1419828112

[21] James P. Farwell. 2014. The Media Strategy of ISIS. Survival 56, 6 (2014), 49–55.DOI:http://dx.doi.org/10.1080/00396338.2014.985436

[22] Emilio Ferrara. 2017. Disinformation and Social Bot Operations in the Run Up to the 2017 French Presidential Election. First Monday 22 (06 2017). DOI:http://dx.doi.org/10.5210/fm.v22i8.8005

[23] Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Quantifying Controversy on Social Media. Trans. Soc. Comput. 1, 1, Article 3 (Jan. 2018), 27 pages. DOI:http://dx.doi.org/10.1145/3140565

[24] Venkata Rama Kiran Garimella and Ingmar Weber. 2017. A long-term analysis of polarization on Twitter. In Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017. AAAI Press, 528–531.

[25] Alexander Halavais and Maria Garrido. 2003. Mapping networks of support for the Zapatista movement. Cyberactivism: Online activism in theory and practice. London: Routledge (01 2003), 165–184.

[26] Michael Hameleers, Linda Bos, and Claes H. de Vreese. 2017. "They Did It": The Effects of Emotionalized Blame Attribution in Populist Communication. Communication Research 44, 6 (2017), 870–900. DOI:http://dx.doi.org/10.1177/0093650216644026

[27] Michael Hameleers and Desirée Schmuck. 2017. It's us against them: a comparative experiment on the effects of populist messages communicated via social media. Information, Communication & Society 20, 9 (2017), 1425–1444. DOI:http://dx.doi.org/10.1080/1369118X.2017.1328523

[28] Homa Hosseinmardi, Sabrina Arredondo Mattson, Rahat Ibn Rafiq, Richard Han, Qin Lv, and Shivakant Mishra. 2015. Analyzing Labeled Cyberbullying Incidents on the Instagram Social Network. In Social Informatics, Tie-Yan Liu, Christie Napa Scollon, and Wenwu Zhu (Eds.). Springer International Publishing, Cham, 49–66.

[29] Xiaodan Lou, Alessandro Flammini, and Filippo Menczer. 2019. Information Pollution by Social Bots. (07 2019).

[30] Robert M. Bond, Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow, Jaime Settle, and James H. Fowler. 2012. A 61-Million-Person Experiment in Social Influence and Political Mobilization. Nature 489 (09 2012), 295–8. DOI:http://dx.doi.org/10.1038/nature11421

[31] Lydia Manikonda, Ghazaleh Beigi, Subbarao Kambhampati, and Huan Liu. 2018. #metoo Through the Lens of Social Media. In SBP-BRiMS.

[32] Frank J. Massey Jr. 1951. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association 46, 253 (1951), 68–78.

[33] Tanushree Mitra, Scott Counts, and James Pennebaker. 2016. Understanding Anti-Vaccination Attitudes in Social Media. (2016). https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/view/13073

[34] Derek O'Callaghan, Derek Greene, Maura Conway, Joe Carthy, and Pádraig Cunningham. 2013. An Analysis of Interactions within and between Extreme Right Communities in Social Media. In Ubiquitous Social Media Analysis, Martin Atzmueller, Alvin Chin, Denis Helic, and Andreas Hotho (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 88–107.

[35] Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, and Filippo Menczer. 2018. The spread of low-credibility content by social bots. Nature Communications.

[36] Massimo Stella, Emilio Ferrara, and Manlio De Domenico. 2018. Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences 115, 49 (2018), 12435–12440. DOI:http://dx.doi.org/10.1073/pnas.1803470115

[37] B. Suh, L. Hong, P. Pirolli, and E. H. Chi. 2010. Want to be Retweeted? Large Scale Analytics on Factors Impacting Retweet in Twitter Network. In 2010 IEEE Second International Conference on Social Computing. 177–184. DOI:http://dx.doi.org/10.1109/SocialCom.2010.33

[38] Yannis Theocharis, Will Lowe, Jan W. van Deth, and Gema Garcia-Albacete. 2015. Using Twitter to mobilize protest action: online mobilization patterns and action repertoires in the Occupy Wall Street, Indignados, and Aganaktismenoi movements. Information, Communication & Society 18, 2 (2015), 202–220. DOI:http://dx.doi.org/10.1080/1369118X.2014.948035

[39] Zeynep Tufekci. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press, New Haven, CT, USA.

[40] Zeynep Tufekci and Christopher Wilson. 2012. Social Media and the Decision to Participate in Political Protest: Observations From Tahrir Square. Journal of Communication 62, 2 (03 2012), 363–379. DOI:http://dx.doi.org/10.1111/j.1460-2466.2012.01629.x

[41] Sebastian Valenzuela, Martina Pina, and Josefina Ramirez. 2017. Behavioral Effects of Framing on Social Media Users: How Conflict, Economic, Human Interest, and Morality Frames Drive News Sharing. Journal of Communication 67, 5 (08 2017), 803–826. DOI:http://dx.doi.org/10.1111/jcom.12325


[42] Onur Varol, Emilio Ferrara, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2017. Online Human-Bot Interactions: Detection, Estimation, and Characterization. (2017).

[43] Onur Varol, Emilio Ferrara, Christine L. Ogan, Filippo Menczer, and Alessandro Flammini. 2014. Evolution of Online User Behavior During a Social Upheaval. In Proceedings of the 2014 ACM Conference on Web Science (WebSci '14). ACM, New York, NY, USA, 81–90. DOI:http://dx.doi.org/10.1145/2615569.2615699

[44] Morgan Vigil-Hayes, Marisa Duarte, Nicholet Deschine Parkhurst, and Elizabeth Belding. 2017. #Indigenous: Tracking the Connective Actions of Native American Advocates on Twitter. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17). ACM, New York, NY, USA, 1387–1399. DOI:http://dx.doi.org/10.1145/2998181.2998194

[45] Joanne Helen Waitoa. 2013. E-whanaungatanga: the role of social media in Maori political engagement: a thesis presented in partial fulfilment of the requirements for the degree of Master of Philosophy in Development Studies at Te Kunenga ki Purehuroa, Massey University, Palmerston North, New Zealand. Ph.D. Dissertation. Massey University.

[46] I. Weber, V. R. K. Garimella, and A. Batayneh. 2013. Secular vs. Islamist polarization in Egypt on Twitter. In 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013). 290–297. DOI:http://dx.doi.org/10.1109/ASONAM.2013.6785722