
24


Political Polling
Do polls accurately measure public attitudes?

Smart phones, social media and the Internet have made it easier than ever for people to make their views known, but the new technology can make it harder for political pollsters to gather and measure public opinions with precision or consistency. They face public suspicions of partisanship, reluctance to provide candid answers and — as cellphone use grows — difficulty reaching respondents by the traditional method of random calls to household landlines. Meanwhile, critics charge that the news media pay too much attention to “horse-race” polls showing who leads in political races and not enough to candidates’ policy ideas. The 2014 elections, in which pollsters miscalled the results of a number of closely watched races, cast a harsh spotlight on the industry, but pollsters contend their record has improved over the years. Some experts see promise in the increasing use of “opt-in” polls such as those on the Internet, but the approach is controversial.

Statistician Nate Silver predicted Barack Obama would win the 2012 presidential election in his FiveThirtyEight.com blog, which aggregates polls rather than relying on a single survey. Critics say media and pollsters put too much emphasis on the so-called “horse race” aspect of election campaigns and not enough on substance.

CQ Researcher • Feb. 6, 2015 • www.cqresearcher.com • Volume 25, Number 6 • Pages 121-144

RECIPIENT OF SOCIETY OF PROFESSIONAL JOURNALISTS AWARD FOR EXCELLENCE • AMERICAN BAR ASSOCIATION SILVER GAVEL AWARD

INSIDE

THE ISSUES ....................123

BACKGROUND ................130

CHRONOLOGY ................131

CURRENT SITUATION ........136

AT ISSUE........................137

OUTLOOK ......................139

BIBLIOGRAPHY ................142

THE NEXT STEP ..............143

THIS REPORT

Published by CQ Press, an Imprint of SAGE Publications, Inc. www.cqresearcher.com

122 CQ Researcher

THE ISSUES

123 • Do polls show a persistent political bias? • Will Twitter and Facebook supplant conventional polling? • Does the media’s avid interest in polls hurt democracy?

BACKGROUND

130 Early Polling
The first political poll was an 1824 survey by a Harrisburg, Pa., newspaper.

133 Dewey vs. Truman
In 1948, pollsters erroneously predicted that Republican Thomas Dewey would defeat President Harry S. Truman.

133 Exit Polling
Media missteps involving exit polls occurred in 1980 and 1996.

CURRENT SITUATION

136 Presidential Race
Experts say pollsters will play a prominent role in the 2016 presidential election.

136 Internet Polling
Pollsters are debating the reliability of opt-in Internet surveys.

138 Robot Interviewers
Some pollsters are considering using digital interviewers.

OUTLOOK

139 Shades of Gray
No consensus exists about the future of political polling.

SIDEBARS AND GRAPHICS

124 Pollsters Finding Households Harder to Reach
Fewer homes have landlines.

125 Most Americans See Bias in Polls
Three-fourths of adults say polls are slanted.

126 It’s Not the Last Straw for Iowa’s Famous Poll
GOP still embraces it, but critics say it’s a platform for extremists.

128 Spending on Political Polls Rising
Total spending on political polling by Republican and Democratic party committees has risen over the past decade in both presidential and midterm elections.

131 Chronology
Key events since 1824.

132 ‘Push Polls’ Cast Negative Light on Candidates
Polling groups say the practice is manipulative.

137 At Issue:
Can “nonprobability” polls reliably reflect public opinion?

FOR FURTHER RESEARCH

141 For More Information
Organizations to contact.

142 Bibliography
Selected sources used.

143 The Next Step
Additional articles.

143 Citing CQ Researcher
Sample bibliography formats.

POLITICAL POLLING

Cover: Getty Images/Bloomberg/David Paul Morris

MANAGING EDITOR: Thomas J. [email protected]

ASSISTANT MANAGING EDITORS: Maryann Haggerty, [email protected], Kathy Koch, [email protected], Scott Rohrer, [email protected]

SENIOR CONTRIBUTING EDITOR: Thomas J. Colin, [email protected]

CONTRIBUTING WRITERS: Brian Beary, Marcia Clemmitt, Sarah Glazer, Kenneth Jost, Reed Karaim, Peter Katel, Barbara Mantel, Tom Price, Jennifer Weeks

SENIOR PROJECT EDITOR: Olu B. Davis

EDITORIAL ASSISTANT: Ethan McLeod

INTERN: Sarah King

FACT CHECKERS: Eva P. Dasher, Michelle Harris, Nancie Majkowski

An Imprint of SAGE Publications, Inc.

VICE PRESIDENT AND EDITORIAL DIRECTOR, HIGHER EDUCATION GROUP: Michele Sordi

EXECUTIVE DIRECTOR, ONLINE LIBRARY AND REFERENCE PUBLISHING: Todd Baldwin

Copyright © 2015 CQ Press, an Imprint of SAGE Publications, Inc. SAGE reserves all copyright and other rights herein, unless previously specified in writing. No part of this publication may be reproduced electronically or otherwise, without prior written permission. Unauthorized reproduction or transmission of SAGE copyrighted material is a violation of federal law carrying civil fines of up to $100,000.

CQ Press is a registered trademark of Congressional Quarterly Inc.

CQ Researcher (ISSN 1056-2036) is printed on acid-free paper. Published weekly, except: (March wk. 4) (May wk. 4) (July wk. 1) (Aug. wks. 3, 4) (Nov. wk. 4) and (Dec. wks. 3, 4). Published by SAGE Publications, Inc., 2455 Teller Rd., Thousand Oaks, CA 91320. Annual full-service subscriptions start at $1,131. For pricing, call 1-800-818-7243. To purchase a CQ Researcher report in print or electronic format (PDF), visit www.cqpress.com or call 866-427-7737. Single reports start at $15. Bulk purchase discounts and electronic-rights licensing are also available. Periodicals postage paid at Thousand Oaks, California, and at additional mailing offices. POSTMASTER: Send address changes to CQ Researcher, 2600 Virginia Ave., N.W., Suite 600, Washington, DC 20037.

Feb. 6, 2015 • Volume 25, Number 6


Political Polling

THE ISSUES

Before November’s elections, many political pollsters thought they had the races nailed: Incumbent Sen. Mark Warner, D-Va., was a shoo-in for re-election to the U.S. Senate, and Georgia’s contests for Senate and governor would be so close that runoff elections would be needed.

Those predictions turned out to be very wrong. Warner sweated out a win by less than 1 percentage point, while Republicans prevailed by such large margins in Georgia that no runoffs were necessary. Polls in several other states also were off the mark.

“Boy, is that an industry that needs some housecleaning,” University of Virginia political scientist Larry Sabato said the morning after the election. 1

The November results — along with polls that failed to anticipate House Majority Leader Eric Cantor’s GOP primary loss in Virginia in June to a virtually unknown college professor — dramatically illustrated the challenges confronting pollsters. Even as technology has made it easier than ever for people to give their opinions, consistently measuring those opinions with pinpoint precision has grown exceedingly difficult. With the 2016 presidential race beginning to unfold, the stakes are high for pollsters: Their work plays a crucial role in influencing the flow of campaign donations and determining overall public support for candidates.

Political polling “is in a state of transition, certainly,” says Michael Link, president of the American Association for Public Opinion Research (AAPOR), the field’s professional association.

Political strategist Patrick Ruffini, a former technology official at the Republican National Committee (RNC), has a more dismal view. “If anything,” he said, “the challenge for polling right now is to keep it from getting any worse than it is.” 2

Polling has become entangled in the nation’s prevailing polarized political climate, with both politicians and the public questioning the validity of polls. At a 2011 rally in Iowa, 2008 GOP vice presidential nominee Sarah Palin colorfully disparaged surveys showing that many Americans regarded her unfavorably: “Polls? . . . They’re for strippers and cross-country skiers,” she declared. 3

A variety of people and institutions conduct political polls, often attached to particular candidates and parties, and most of their surveys are kept secret. Others, whose work is made public, are affiliated with colleges, think tanks and news outlets. Pollsters also help interest groups and political donors hone their messages or decide to whom they should give money.

In his 2012 re-election campaign, President Barack Obama’s strategists created an “analytics department” that was five times larger than the one Obama used in 2008. The department combined statistics from pollsters with material from other sources, including social-media sites such as Twitter. It used that information to run tests predicting how people could be influenced by certain appeals.

Obama’s campaign also used polls to develop a sophisticated understanding of who would vote in key states. In Ohio alone, it collected polling data from about 29,000 people — an enormous sample that enabled the campaign to study Obama’s appeal across a wide demographic spectrum. 4

The news media make a big deal out of political polls. Surveys showing a president’s job-approval rating, or the results of polls on a controversial topic such as the Affordable Care Act, become prominent news. And polls draw intensive coverage in the months and years leading up to elections.

BY CHUCK MCCUTCHEON

AP Photo/Richmond Times-Dispatch/Bob Brown

Sen. Mark Warner, D-Va., debates Republican strategist Ed Gillespie, left, during the 2014 Virginia Senate race. Polls predicted Warner would easily win re-election, but he just squeaked by. In another dramatic polling miscue — also in Virginia — polls failed to anticipate House Majority Leader Eric Cantor’s stunning primary defeat by a virtually unknown college professor. Despite their failures, polls are expected to play a key role in the 2016 presidential race and other contests.

Election polls require assessments of what groups will show up to vote and in what proportion to other groups. “Election polls are a special breed among public-opinion surveys,” said Cliff Zukin, a Rutgers University public policy professor. “They call for more judgments — the art rather than science of the craft — on the part of the pollster than other types of polls.” 5

The United States isn’t the only place where election polling can be difficult. After September’s referendum on whether Scotland should become independent from the United Kingdom, analysts noted the final “no” vote was by a much wider margin than polls had predicted. 6

Telephone surveys historically have been the main means by which pollsters gathered public opinion about candidates, issues and consumer products. But pollsters of all types have had to cope with dramatic changes in telephone use. In 2009, the percentage of homes without a landline was less than 25 percent; by July 2014, it exceeded 40 percent. 7

When nearly all Americans had a landline, pollsters had a relatively easy time designing surveys. These days, however, a “dual-frame sample” — which combines landlines and cellphones — provides the closest to a true probability sample, the bedrock principle of phone polling. Probability sampling is based on the concept that for the universe of people being surveyed, each member has a defined and equal likelihood of being selected to participate in the survey.
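The probability-sampling principle described above can be sketched in a few lines of Python. This is an illustration only; the population size, sample size and seed are invented, not drawn from the report:

```python
import random

# Hypothetical universe of 10,000 phone numbers (landlines plus cellphones,
# i.e., a "dual frame" combined into one list).
population = [f"number_{i}" for i in range(10_000)]

# A simple random sample: every number has the same 1,000/10,000 = 10% chance
# of being drawn, which is what makes this a probability sample.
random.seed(42)
sample = random.sample(population, k=1_000)

# Each member's selection probability is k / N, the same for everyone.
selection_probability = len(sample) / len(population)
print(selection_probability)  # 0.1
```

In practice, pollsters draw separately from the landline and cellphone frames and then combine the two samples, but the defining property is the same: a known, equal chance of selection for every member.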

“The polling field has been addressing developments in technology that have made it harder and harder to reach people,” says Tim Vercellotti, director of Western New England University’s Polling Institute in Springfield, Mass. “It began with [the advent of] answering machines, then caller ID, then cell phones.” The latter “really sort of cut the knees out from under the industry,” he said, because cell users are less willing to take calls from unfamiliar numbers.

Vercellotti and other pollsters say the industry is doing better at incorporating cellphones. But their increasing use has come as public response rates to polling questions have plummeted, from 36 percent in 1997 to just 9 percent in 2012, according to the Pew Research Center on the People & the Press. 8

In addition, computer-executed polls, dubbed “robopolls,” that rely on automatic rather than person-to-person dialing, have proliferated, and federal regulations bar most robopolls from calling cell phones, a limitation that can end up omitting voters.

Another new type of polling — Internet-based opt-in “nonprobability” polls — has begun to flourish and is drawing controversy. * (See “At Issue,” p. 137.)

Politicians and interest groups employ such opt-in polls to engage followers. But the questions can be loaded. The website of Virginia GOP Rep. Morgan Griffith, for example, recently featured an “issue survey” asking questions such as, “Do you support or oppose the Environmental Protection Agency enacting new regulations that make it harder to use coal as an energy source, killing local jobs and driving up electricity rates?” 9

In 2008, long after House Democratic leaders refused to consider some liberal Democrats’ proposal to impeach GOP President George W. Bush over the decision to invade Iraq, then-California Democratic Rep. Pete Stark — a liberal and outspoken Bush critic — drew admiration from anti-Iraq War groups when he asked in a survey on his House website if voters backed impeachment. 10

Griffith’s and Stark’s polls weren’t intended to be scientific. But pollsters say scientific nonprobability polls conducted online increasingly are worthy of consideration because pollsters can perform statistical adjustments, known as “weighting,” to compensate for those who are not Internet users.
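The kind of “weighting” adjustment described here can be sketched as follows. All of the group shares and support rates below are hypothetical numbers chosen for illustration; real pollsters weight on many demographic variables at once:

```python
# Population shares (hypothetical) vs. shares actually observed among respondents.
population_share = {"internet_user": 0.85, "non_internet_user": 0.15}
sample_share     = {"internet_user": 0.95, "non_internet_user": 0.05}

# A respondent's weight is the ratio of the group's population share to its
# share of the sample, so underrepresented groups count proportionally more.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical group-level candidate support, estimated with and without weights.
support = {"internet_user": 0.48, "non_internet_user": 0.60}
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(round(unweighted, 3), round(weighted, 3))  # 0.486 0.498
```

The weighted estimate shifts toward the underrepresented group’s views, which is the intended correction; as the article notes later, the same mechanism skews results when the assumed population shares are wrong.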

In the old days of landline phone polls, pollsters did not need to make such adjustments very often. “When everyone had a landline and answered it . . . even a no-name call center could [dial] random digits and get an arguably representative sample of adults with a reasonable response rate,” says Mark Blumenthal, The Huffington Post’s senior polling editor. “Media and academic organizations could do even better. Very little weighting was required.”

* Nonprobability polls are those in which the participants choose to participate themselves or are chosen for their particular demographic attributes, rather than just being randomly selected. Such polls include so-called “opt-in” polls in which people volunteer — generally online — to take part.

Pollsters Finding Households Harder to Reach
The percentage of households providing usable interviews for Pew Research Center polls conducted via landline telephones fell from 36 percent to 9 percent between 1997 and 2012. Success rates for contacting households in which an adult was reached also dropped, from 90 percent to 62 percent.

[Chart: “Households Responding to Pew Research Surveys” — percentage responding, 1997-2012, for households contacted vs. households providing a usable interview.]

Source: “Assessing the Representativeness of Public Opinion Surveys,” Pew Research Center, May 15, 2012, http://tinyurl.com/7e8gjrr

Weighting has been shown to improve the quality of polls. Pew concluded in a 2012 study that telephone surveys that include landlines and proper weighting “continue to provide accurate data on most political, social and economic measures.” 11 But if weighting assumptions prove wrong in either phone or Internet polls, they can skew accuracy. Some experts say excessive and improper weighting plagued too many 2014 election surveys.

“The problem, from my point of view, is that there’s just so much weighting going on,” Stuart Rothenberg, a political forecaster and columnist, said at a November forum on polling.

Scientific polls also have become more expensive. Statistician Nate Silver, founder of the popular FiveThirtyEight.com blog about polls, said the cost of a top-quality poll “can run well into five figures, and it has increased as response rates have declined.” 12

Pollsters maintain that their mistakes have been exaggerated and that polling accuracy has demonstrably improved. Eric McGhee, a research fellow at the Public Policy Institute of California, a nonpartisan think tank, concluded in October 2014 that polling in Senate contests has become so much more accurate since 1990 that “looked at a certain way,” today could be considered “a golden age of polls.” 13

Pollsters say their surveys get unfairly blamed when candidates lose or are badly trailing their opponent. “People expect polls to do too much,” says Democratic pollster Anna Greenberg. “Polls can’t predict turnout, or measure the effectiveness of a field operation or campaign.”

As pollsters, academics, the news media and others debate political polling, here are some of the questions being raised:

Do polls show a persistent political bias?

For years, politicians and activists across the political spectrum have argued that too many polls are slanted in favor of the opposition. A 2013 survey by the London-based research firm Kantar found three-quarters of Americans say most polls are “biased toward a particular point of view.” 14

(See graphic, p. 125.)

Before the 2012 presidential election, some Republicans alleged that polls were stacked against them. GOP blogger Dean Chambers started a website, UnskewedPolls.com, to remove what he charged was the deliberate inclusion of too many Democrats in a projection of the voting population. 15 In September, the site’s polls put Republican Mitt Romney up by more than 7 percentage points at a time when regular polls showed Obama ahead by several percentage points. 16 Romney eventually lost by nearly 4.

In 2004, Democrats were the ones complaining about biased polls. When a Gallup poll showed Democratic presidential contender John Kerry trailing President Bush, the liberal group MoveOn.org took out a full-page ad in The New York Times blasting what it called Gallup’s “flawed methodology,” adding, “This is more than a numbers game. Poll results profoundly affect a campaign’s news coverage as well as the public’s perception of the candidates.” 17

Some Democrats again criticized the polls in 2014, before Republicans made large gains in the congressional elections. Democratic strategist Brent Budowsky wrote in the Capitol Hill newspaper The Hill: “There are so many razor-thin Senate races that confident predictions of which party holds Senate control are, to paraphrase a line from Jack Nicholson in ‘Chinatown,’ wind from a duck’s derriere.” 18

But people who study polling say that while polls have sometimes underestimated one party’s support over another’s, it’s not because of partisanship.

“There have been years, like 1980 and 1994, when the polls did underestimate the standing of Republicans,” said Silver, who has written repeatedly about the issue. “But there have been others, like 2000 and 2006, where they underestimated the standing of Democrats.” 19

In a subsequent article, Silver examined Senate polls between 1990 and 2012 and found the GOP underperformed at the ballot box by less than one-half of 1 percent over what the polls had showed. “Democrats might argue that a Republican bias has been evident in recent years — even if it hasn’t been there over the longer term,” he wrote. “But the trend is nowhere close to statistically significant. Nor in the past has the direction of the bias in the previous election cycle been a good way to predict what it will look like in the next cycle.” 20

Most Americans See Bias in Polls
Three-fourths of American adults surveyed recently said public opinion polls are biased, with more than half saying they are slanted “in some way.”

Percentage of Respondents Who Said Polls Are Biased*
Biased in some way: 57%
Biased toward liberals: 22%
Biased toward conservatives: 5%
Not biased: 11%
Don’t know: 7%

* Totals do not add to 100 percent due to rounding. Survey based on telephone interviews with 562 adults.

Source: “Kantar’s ‘Path to Public Opinion’ Poll,” Kantar Group, Sept. 4, 2013, http://tinyurl.com/kbq8jgk

Even if pollsters add more voters from one party to a sample, they say those additions are relatively small from a statistical standpoint.

“A sample that has 2 percentage points more in one survey, Democrat, and another state has a Republican a point more, this is all very much within what’s called the error margin,” said Lee Miringoff, director of the Marist Institute for Public Polling in Poughkeepsie, N.Y. 21
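The “error margin” Miringoff refers to can be approximated, for a simple random sample, by the standard formula z * sqrt(p(1 - p)/n) at 95 percent confidence (z is about 1.96). A quick sketch with a hypothetical 1,000-person sample shows why a difference of a point or two falls inside it:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a
    simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# For a typical 1,000-person poll, a 50/50 split carries roughly a
# +/- 3-point margin, so the small differences Miringoff describes
# are well within sampling noise.
moe = margin_of_error(0.5, 1000)
print(round(100 * moe, 1))  # 3.1
```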

Some polling experts cite a phenomenon known as “herding,” or adjusting one’s data to make it conform to the accepted norm in other election polls, especially if one’s original results are radically different from other polls.

“Some pollsters think, ‘It can be easier to be wrong with the crowd than it is to stick your neck out,’ ” says Stefan Hankin, a pollster for Obama’s 2008 campaign who now runs his own polling company.

Last August, Silver’s blog found possible examples of nontraditional, opt-in polls done over the Internet adjusting their results to reflect what the website described as higher-quality “gold-standard” polls. “None of this proves guilt, but it does raise the possibility some pollsters may be peeking at their neighbors’ papers,” it said. 22

Other pollsters, however, say reputable members of their profession are far more interested in being accurate than in pleasing one party or the other or others in their industry. “We all want to get it right,” Vercellotti says. “Conscientious survey researchers go through this pretty regularly when doing polls: You look at your estimates and ask yourself, ‘Did I do everything the way I’ve been trained, and is the methodology sound?’ If the answer is yes, you’ve got to put it out there.”

Overall poll numbers can be affected when candidates publicly release private polling data only when it makes them look good, pollsters say, even if such so-called “outlier” polls don’t reflect the true state of the race. Those polls can be folded into political forecasts based on an aggregation of multiple polls, skewing the overall result. “We don’t release intentionally misleading surveys,” says Greenberg, who has done polling for Democrats, including New York Mayor Bill de Blasio, Pennsylvania Gov. Tom Wolf and several congressional candidates. “If we get an outlier we will discourage our candidates from releasing it when it’s probably not correct.”

Will Twitter and Facebook supplant conventional polling?

It’s Not the Last Straw for Iowa’s Famous Poll
GOP still embraces it, but critics say it’s a platform for extremists.

One of the nation’s best-known political polls — the Ames Straw Poll, also called the Iowa Straw Poll — will be held again this summer after surviving an effort last year to scrap it.

First held in 1979, it is a poll of Republican presidential candidates taken every four years at an August fundraising event in Ames a year before the presidential election. It is open only to Hawkeye State residents regardless of whether they are registered Republicans. They cast ballots after participating candidates address the audience. 1

The poll — run by the state GOP — has no effect on the actual primary process, but campaigns and the political news media have come to regard it as a critical test of a White House aspirant’s organizational ability and grassroots support.

Some critics of the poll say it encourages extremist elements of the Republican Party, and some moderate Republicans have opted in the past not to take part in it. “Nationally, the Republican Party has been hijacked by extreme factions, making it difficult to appeal to the emerging voting majorities across the country, which is necessary to win the White House,” said James Strohman, an Iowa State University political science lecturer. By appealing to the party’s extremists, he said, the poll “makes it difficult for moderate, electable candidates to prevail in the Iowa caucus and beyond.” 2

Iowa Republican Gov. Terry Branstad said last year the poll had outlived its usefulness and called for ending it. He and some other Republicans said they were concerned it could jeopardize the state’s ability to hold the February 2016 political caucuses, the first regular voting event of the presidential campaign. 3

In December, however, a majority of the Iowa GOP’s Central Committee favored keeping the straw poll. They asked the Republican National Committee (RNC) whether the poll violates new RNC guidelines about holding voting events prior to the caucuses.

The RNC’s general counsel, John Ryder, said in January the poll did not violate those guidelines and could continue as long as it isn’t promoted as an “official event.” 4 The Central Committee then voted unanimously to keep the poll. 5

The poll’s defenders contend it puts Iowa on the national political stage early and brings in money for the state party.

“The only thing that we need to scrap is the talk about there not being a straw poll,” said Iowa U.S. Rep. Steve King, an influential figure in the state’s GOP presidential politics. 6

In a subsequent letter to supporters before Ryder’s decision

Indiana University sociologist Fabio Rojas caused a stir in polling circles in 2013 when, in a Washington Post column titled “How Twitter Can Predict an Election,” he boasted that social media eventually would put pollsters out of work.

“We no longer passively watch our leaders on television and register our opinions on Election Day. Modern politics happens when somebody comments on Twitter or links to a campaign through Facebook,” Rojas wrote. “In our hyper-networked world, anyone can say anything, and it can be read by millions. This new world will undermine the polling industry.”

Along with several colleagues, Rojas conducted a study concluding that Twitter discussions “are an unusually good predictor” of the outcome of U.S. House elections. Drawing on an archive of billions of randomly sampled tweets, they extracted more than 542,000 that mentioned a Democratic or Republican candidate in 2010. They found a “strong correlation” between a candidate’s “tweet share” and the final two-party vote, and said that share correctly predicted the winner in 404 of 435 races, a success rate of nearly 93 percent. 23
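The “tweet share” measure the study used is simply a candidate’s fraction of all tweets mentioning either major-party candidate, with the larger share taken as the predicted winner. A minimal sketch, with invented candidate names and mention counts:

```python
# Hypothetical mention counts for the two major-party candidates in one race.
mentions = {"candidate_dem": 1_800, "candidate_rep": 2_400}

# Tweet share: each candidate's fraction of the combined mention count.
total = sum(mentions.values())
tweet_share = {name: count / total for name, count in mentions.items()}

# The study's rule of thumb: the candidate with the larger tweet share
# is predicted to win the two-party vote.
predicted_winner = max(tweet_share, key=tweet_share.get)
print(predicted_winner, round(tweet_share[predicted_winner], 2))  # candidate_rep 0.57
```

As the critics quoted below argue, a predictor like this can look accurate simply because most House races are not competitive.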

Researchers at the University of Massachusetts-Amherst noted that in nine tossup Senate races in 2012, the candidate with the most Facebook followers won all but one. 24 In 2010, meanwhile, the candidate with the greater number of Facebook followers won in 71 percent of Senate elections, and in some cases the social-media site was found to be a better predictor of election results than how much money candidates raised and spent. 25

However, those in the polling industry say the Indiana researchers’ work contained misguided assumptions. Political forecaster Rothenberg noted that just about 100 of the 406 House races in 2010 were seriously competitive.

“Since House re-election rates have been over 90 percent in 19 of the past 23 elections, you don’t need polls or tweet counts to predict the overwhelming majority of race outcomes,” he said in a column. “In most cases, all you need to know is incumbency (or the district’s political bent) and the candidates’ parties to predict who will win.” 26

Steve Koczela, president of Boston’s MassINC Polling Group, agreed with Rothenberg. “Social media changes too fast to give me much confidence that models which worked in 2010 will work in 2016 or 2020,” he said. 27

Facebook has had some major election failures. In 2010, Republicans Christine O’Donnell, Sharron Angle and Meg Whitman amassed many more online followers than their opponents did in the races for Delaware senator, Nevada senator and California governor, respectively, but still lost. And in those elections, Facebook and Twitter were less accurate at predicting who would win the support of younger voters. 28

More recently, two potential 2016 White House aspirants, Democrat Hillary

was announced, King said conservatives “should be outraged that establishment Republicans are working in secret to end the Ames Straw Poll.” 7

Some candidates go to great lengths to try to influence the poll. In 2011, Texas Rep. Ron Paul paid $31,000 to buy the most-desired booth space outside the convention center in which the vote took place. 8 He ended up narrowly losing to Minnesota Rep. Michele Bachmann. 9

Strongly conservative Republicans usually have prevailed in the poll over moderates. In addition to Bachmann, poll winners on the far right who subsequently failed to gain political traction were television evangelist Pat Robertson in 1987 and Texas Sen. Phil Gramm in 1995.

A poor showing, on the other hand, has led some candidates to abandon their races. Former Tennessee Gov. Lamar Alexander dropped out after a fifth-place finish in 1999, while ex-Minnesota Gov. Tim Pawlenty withdrew after taking third in 2011.

The poll involves only a minority of voters: about 17,000 Iowans voted in the straw poll in 2011, compared to the roughly 120,000 who took part in the Iowa caucuses. Republican Mitt Romney — the party’s eventual nominee — chose not to participate in the straw poll, believing he had more of a chance to have an impact elsewhere.

“I suspect many candidates in 2016 will make the choice Romney made in 2012,” said Phil Musser, a Republican political strategist who served as an adviser to Pawlenty. 10

— Chuck McCutcheon

1 Aaron Blake, “The Ames Straw Poll, Explained,” The Washington Post, Aug. 8, 2011, http://tinyurl.com/3ragg49.
2 James Strohman, “Guest Column: Iowa’s GOP Should End Straw Poll,” IowaStateDaily.com, Sept. 18, 2014, http://tinyurl.com/kzjpbre.
3 Catherine Lucey, “Branstad Seeks to End Iowa GOP Straw Poll,” The Associated Press, Dec. 15, 2014, http://tinyurl.com/k5y4twb.
4 James Hohmann, “RNC Gives Iowa GOP a Green Light for Straw Poll,” Politico, Jan. 8, 2015, http://tinyurl.com/nfwwrdo.
5 James Hohmann, “Iowa Republicans Vote to Continue Ames Straw Poll,” Politico, Jan. 9, 2015, http://tinyurl.com/mf3m2v7.
6 Emily Schultheis, “Will the Iowa Straw Poll Survive?” National Journal, Aug. 9, 2014, http://tinyurl.com/nutumvy.
7 “King: Some Republican leaders want to end Iowa Straw Poll to silence grassroots conservatives,” Storm Lake Pilot Tribune, Dec. 29, 2014, http://tinyurl.com/kofg5g6.
8 Blake, op. cit.
9 “Ames Straw Poll: Michele Bachmann Beats Ron Paul — By 152 Votes,” Los Angeles Times, Aug. 13, 2011, http://tinyurl.com/lgg37mh.
10 Lucey, op. cit.

128 CQ Researcher

Rodham Clinton and Republican Ted Cruz, have dominated social media. Politico determined in December 2014 that over the previous three months, the former secretary of State and the Texas senator together accounted for 40 percent of Facebook discussions and 47 percent of mentions on Twitter among the 10 leading presidential possibilities. 29

But even though Clinton has been the overwhelming favorite among Democrats for 2016, Cruz has trailed well behind several other Republican candidates in polls of voters’ possible preferences, drawing between 4 percent and 8 percent in December surveys. 30

A 2013 Pew study found that reaction on Twitter to major political events and policy moves “often differs a great deal from public opinion as measured by surveys.” Twitter reaction was measured as more liberal than the public’s after a federal court ruled that a California law banning same-sex marriage was unconstitutional; but it was more conservative than the overall public reaction to Obama’s State of the Union and second inaugural speeches. 31

Although polling experts say social media remain too imprecise to supplant conventional polling, they say it could be helpful in augmenting such polling. For example, pollsters could learn from Twitter users what poll questions they should ask the broader public.

“I think the rise of social media presents an incredible opportunity for those of us who care about watching and measuring and understanding public opinion,” The Huffington Post’s Blumenthal says. “If you look at the way companies are turning to Twitter, they see it as an early-warning radar: When their best customers start complaining, they can watch it quickly. There are possibly analogous opportunities in political opinion research.”

Does the media’s avid interest in polls hurt democracy?

Some in the political world say the news media’s voracious appetite for polls distorts the political process. Much of the criticism centers on “horse race” journalism, which concentrates more on which candidates are trending up or down than on the issues of the campaign.

Polls are often the leading source of information in horse race articles, even though they usually lack context, observers say. “The problem is, journalists look at the poll and think that’s the story, rather than going out and looking at what is really the story,” says Susan Pinkus, a former Los Angeles Times polling director who is now a public opinion and strategic analysis consultant.

Tom Rosenstiel, founder and director of Columbia University’s Project for Excellence in Journalism, agreed that current political journalism “has increased the tendency to allow polls to create a context for journalists to explain and organize other news — becoming the lens through which reporters see and order a more interpretative news environment.” The dependence on horse race polls, he added, has “reinforced these tendencies, and further thinned the public’s understanding toward who won and away from why.” 32

The Internet and social media, with their insatiable demands for new material, have made the situation worse, said Rothenberg, the election forecaster. “Stuff that’s an hour old is ancient news,” he lamented.

After winning a 2010 re-election race that many pollsters predicted he would lose, Nevada Democratic Sen. Harry Reid — then majority leader — admonished the media for what he called its overeagerness to report on “misleading” polls. Those polls “are all over the country, they are so unfair, and you just gobble them up — no matter where they came from,” Reid told reporters. “You just run with them as if they are the finest piece of pastry in the world.” 33


Spending on Political Polls Rising

Total spending on political polling by Republican and Democratic party committees has risen over the past decade during both presidential (2004, 2008, 2012) and midterm election years (2006, 2010, 2014). Party committees spent almost the same amount on polling in the 2014 midterms, but in the 2012 presidential year Democrats outspent their GOP counterparts by nearly $10 million.

[Bar chart: Spending on Political Polling, by Party, 2004-2014, in $ millions, showing Democratic, Republican and total spending for each election cycle.]

* Includes spending by national, state and local party committees and congressional campaign committees; data exclude senatorial campaign committees

Source: Center for Responsive Politics


In 2012, syndicated columnist Michael Gerson, a former speechwriter for President George W. Bush, also decried the attention given to Silver’s FiveThirtyEight blog, which aggregated and analyzed numerous polls to produce a widely reported accurate forecast in that year’s presidential race. “The most interesting and important thing about politics is not the measurement of opinion but the formation of opinion,” Gerson wrote. “Public opinion is the product — the outcome — of politics; it is not the substance of politics.” 34

Despite the attention paid to his site, Silver said his use of polls to estimate a candidate’s percentage-based chances of winning can conflict with the media’s hunger for greater certainty. “What’s challenging is that [FiveThirtyEight] goes against a lot of instincts that some journalists might have, where you want to put a ribbon on something and say, ‘This candidate is going to win’ or if you don’t do that, then you say, ‘Too close to call. We have no idea. It’s 50-50,’ ” he said. “We don’t exist in a 100-0 or 50-50 world. Most things in the world are 75-25 outcomes.” 35

Experts in polling with backgrounds in journalism say many reporters aren’t trained in interpreting polls. “The news media have long indulged themselves in the lazy luxury of being both data-hungry and math-phobic,” said Gary Langer, a former ABC News polling director who now runs his own survey research company. 36

The National Council on Public Polls, an organization of leading survey research firms, has sought to help reporters better understand polls. It publishes an online guide, “20 Questions a Journalist Should Ask About Poll Results.” The questions include, “How many people were interviewed for the survey?” 37

Many news organizations do try to avoid reporting on polls that political partisans have sponsored. “Be wary of polls paid for by candidates or interest groups,” says The Associated Press Stylebook, which many news organizations use as a guide. “Their release of poll results may be done selectively and is often a campaign tactic or publicity ploy.” 38

University of Wisconsin political scientist Charles Franklin, who studied publicly released horse-race polls from 2000 and 2002, found that polls identified as partisan by National Journal’s “Hotline,” a political website, tended to skew in favor of their preferred candidate. 39

But Greenberg, the Democratic pollster, says some reporters have taken the edict too far. “I’ve tried to have conversations with journalists who tell me, ‘I can’t trust anything you say because you’re a partisan pollster,’ ” she says.

Pollsters applaud media outlets for often recognizing the importance of the sampling error margin, expressed as a plus-or-minus number of percentage points.

“What is less commonly known is that the margin of sampling error does not apply to the spread between the two candidates, but to the percentage point estimates themselves,” said Rutgers’ Zukin, a former AAPOR president. As an example, he cited 2012 polls that showed Obama ahead of Romney, 47 percent to 44 percent, with an error margin of plus or minus 3 percentage points. “If applied to the 3-point spread, the 3-point margin of error would seem to say that Obama’s lead might be as large as 6 (3 + 3), or as little as 0 (3 - 3),” he said. “But when correctly applied to the percentage point estimates for the candidates, Obama’s support could be between 50 percent and 44 percent (47 plus or minus 3), and Romney’s between 41 percent and 47 percent (44 plus or minus 3). Thus the range between the candidates could be from Obama having a 9-point lead (50-41) to Romney having a 3-point advantage (44-47). So, sampling error is generally much larger than it may seem, and is one of the major reasons why polls may differ, even when conducted around the same time.” 40
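Zukin’s arithmetic is easy to verify. The short Python sketch below is illustrative only; the function names are ours, and the figures are Zukin’s 2012 example. It applies the margin to each candidate’s estimate separately and then derives the plausible range of the spread:

```python
# Illustrative sketch (not from the report): apply a +/-3-point sampling
# margin to each candidate's estimate, then derive the plausible spread.

def candidate_range(estimate, margin):
    """Plausible support range for one candidate, in percentage points."""
    return (estimate - margin, estimate + margin)

def spread_range(a, b, margin):
    """Plausible range of candidate A's lead over candidate B when the
    margin applies to each estimate separately, not to the spread."""
    a_lo, a_hi = candidate_range(a, margin)
    b_lo, b_hi = candidate_range(b, margin)
    return (a_lo - b_hi, a_hi - b_lo)

obama, romney, moe = 47, 44, 3
print(candidate_range(obama, moe))       # (44, 50)
print(candidate_range(romney, moe))      # (41, 47)
print(spread_range(obama, romney, moe))  # (-3, 9): Romney +3 to Obama +9
```

As Zukin notes, the naive reading of the 3-point margin (a lead somewhere between 0 and 6) understates the uncertainty, which actually runs from a 3-point Romney edge to a 9-point Obama lead.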

Experts commonly cite a tendency among the media to emphasize a candidate’s leads in polls even when the lead is within or near the statistical margin of error. In a 2013 research paper, an American University student found several examples from the 2012 campaign, including a report on NBC “Nightly News” in October that showed Romney up by 4 percentage points. The network put onscreen, but did not make clear verbally, that the margin of error was plus or minus 3.4 percentage points. 41

At a 2011 rally in Iowa, 2008 GOP vice presidential nominee Sarah Palin colorfully disparaged surveys showing that many Americans regarded her unfavorably: “Polls? . . . They’re for strippers and cross-country skiers,” she declared. Polling has become entangled in the nation’s prevailing polarized political climate, with politicians from both parties and the public questioning the validity of polls.

Getty Images/Scott Olson

Certain elements of the media, however, have become extremely sophisticated in explaining polls to the public, say some pollsters. “You have a lot of people who are trying to systematically evaluate the value of various [polling] enterprises,” says Frank Newport, editor in chief at Gallup.

Newport singled out The New York Times’ online “Upshot” column, which in 2014 allowed readers to use its computer model to make their own statistical forecasts in Senate races. 42 Newport also cited The Washington Post and Blumenthal’s site on The Huffington Post, which dissect various polls and discuss what’s happening in the industry.

Langer says the media shouldn’t be singled out for criticism for being inordinately poll-hungry. He cites the public relations industry, “which has discovered that polling is the new PR.” Public relations companies say polls can highlight the importance of a product or service and that news releases featuring polls stand out from other releases. 43

Even horse race coverage has its defenders, who say it can be valuable when done properly. Greg Marx, an associate editor at Columbia Journalism Review, said it is especially useful at the outset of a presidential campaign’s primary election season, when parties are trying to assess their most-electable aspirants.

“If reporters wait for the voters to weigh in to take stock of who’s ahead, they’ll have missed much of the story,” he said. 44

BACKGROUND

Early Polling

Many politicians have long contended that political and legislative success springs from an ability to see and implement the public’s wishes. “What I want to get done is what the people desire to have done, and the question for me is how to find that out exactly,” President Abraham Lincoln said. 45

Efforts to discern what people wanted began decades before Lincoln took office. The first public opinion poll was an 1824 Harrisburg Pennsylvanian survey that correctly predicted Andrew Jackson would defeat John Quincy Adams. However, because neither candidate won a majority of the electoral vote, the election went to the House of Representatives, which picked Adams. 46

Newspapers continued to supplement their election coverage by interviewing voters as they left polling places. By the end of the 19th century, such “straw polls” were a feature of local as well as national papers and magazines. 47 Some theorize the term came from holding up a stalk of straw to see which way the wind blew it. 48

By the 20th century, more sophisticated methods emerged. In 1932, advertising agency employee George Gallup used a new market research technique to predict the election of his mother-in-


Polling pioneer George Gallup used what may have been the first scientific political survey in 1932 to predict that his mother-in-law, Ola Babcock Miller, would be elected Iowa’s first female secretary of state. Three years later, he founded the Gallup Organization. In 1936, Gallup correctly predicted that Democratic incumbent Franklin D. Roosevelt would beat Republican Alf Landon in the presidential election, after a magazine poll of 2 million mostly well-off Americans erroneously predicted Landon would unseat Roosevelt.

Getty Images/Erich Auerbach


Chronology

1800s
Polls emerge to measure political opinions.

1824
In first opinion poll, Harrisburg Pennsylvanian correctly predicts presidential candidate Andrew Jackson’s defeat of John Quincy Adams, but the House of Representatives elects Adams after neither candidate wins majority of electoral vote.

1930s-1950s
Scientific polls become popular but have shortcomings.

1932
Advertising-agency employee George Gallup predicts his mother-in-law’s election as Iowa’s first female secretary of state in what the company he later started says may have been the first scientific political survey.

1935
Gallup founds the American Institute of Public Opinion.

1936
Literary Digest polls 2 million people but incorrectly predicts that Republican Alf Landon would beat Democratic President Franklin D. Roosevelt; Gallup correctly calls the election for Roosevelt after conducting his own, more representative poll of 50,000 people.

1941
National Opinion Research Center, a private nonprofit group, is founded as the first noncommercial polling agency.

1947
American Association for Public Opinion Research founded as professional pollsters’ organization.

1948
With polls showing Thomas Dewey leading President Harry S. Truman in the presidential election, pollsters stop conducting interviews weeks or months before the vote and fail to forecast Truman’s victory.

1960s-1980s
Political polls and pollsters gain higher profile.

1960
Democratic presidential contender John F. Kennedy relies heavily on poll data to help him defeat Republican Richard M. Nixon.

1968
Pollster Robert Teeter becomes adviser to Nixon and later to Republican presidential candidate George H. W. Bush.

1976
Democratic pollster Patrick Caddell urges presidential candidate Jimmy Carter to focus on creating an image of trustworthiness to counter post-Watergate skepticism of politicians.

1981
Richard Wirthlin joins Reagan administration as first semi-official staff pollster.

1990s-2000s
Major polling mistakes draw criticism of the industry.

1996
Three television networks incorrectly cite exit polls to project a third-place finish for Republican Bob Dole in Arizona’s presidential primary. Followers of Republican presidential candidate Patrick Buchanan are later revealed to have tried to influence exit pollsters.

2000
Voter News Service uses exit poll data to project — before Florida’s polls close — that Vice President Al Gore would win that state and the presidency. The U.S. Supreme Court ultimately declares George W. Bush the winner.

2004
Gallup poll shows White House contender John Kerry trailing President George W. Bush, leading the left-wing group MoveOn.org to blast Gallup’s “flawed methodology.”

2007
Statistician and poll aggregator Nate Silver issues predictions about the upcoming presidential election and subsequently founds the influential FiveThirtyEight.com blog, which is picked up by The New York Times before moving to ESPN.

2008
Pre-primary polls in New Hampshire put presidential contender Barack Obama ahead of Hillary Rodham Clinton, but she wins easily. Experts blame an overstatement of support for Obama among whites.

2012
Alleging that polls are stacked against them, Republicans heavily criticize Silver for concluding that Democrat Obama’s statistical chances of election are greater than Republican rival Mitt Romney’s.

2014
Five months after being rebuked for failing to predict the GOP primary loss of House Majority Leader Eric Cantor, R-Va., pollsters draw even more criticism for miscalling races in several states and underestimating the GOP’s strength at the polls.


law, Ola Babcock Miller, as Iowa’s first female secretary of state in what the company he started says may have been the first scientific political survey. 49

Three years later, Gallup founded the American Institute of Public Opinion in Princeton, N.J., and began writing a syndicated column, “America Speaks.” The first column was about government spending: 60 percent of respondents said it was “too great,” while 31 percent said it was “about right” and 9 percent said “too little.” 50

Polls played a key role in the 1936 presidential election. Literary Digest polled 2 million Americans — mostly well-off people who read magazines,



They might seem legitimate, but so-called push polls aren’t actually polls — they’re tools of parties and political activists willing to engage in dirty tricks. In a push poll, members of the public respond to what they believe is a valid telephone-based survey, only to be asked leading and negative questions about a politician. The intent is to try to influence opinion rather than to gather information.

Major polling organizations strongly condemn the practice, including the National Council on Public Polls, a coalition of those groups, which calls push polling “political manipulation trying to hide behind the smokescreen of a public opinion survey.” 1

A prominent example of push polling came in a 2013 special U.S. House election in which South Carolina Republican Mark Sanford — a former governor whose career had been derailed by revelations of an extramarital affair — was running against Democrat Elizabeth Colbert Busch, a sister of television humorist Stephen Colbert. A group supporting Sanford, called SSI Polling, reportedly conducted a poll in which questions included: “What would you think of Elizabeth Colbert Busch if I told you she had an abortion?” and “What would you think of Elizabeth Colbert Busch if she had done jail time?” and “What would you think of Elizabeth Colbert Busch if I told you she was caught running up a charge account bill?” 2

None of the accusations was true. Sanford — whose campaign denied knowledge of the calling effort — won the election less than a week after the media and political blogs reported on the calls. 3

According to the liberal website ThinkProgress.org, the Connecticut market research firm SSI said it had been involved in placing calls to South Carolina voters with similar content on behalf of an unidentified client. A company official said her version of the call script did not contain any questions about abortion but that other versions of the script could have been used.

South Carolina was the scene of another push-polling controversy during the 2000 presidential campaign. Arizona Republican Sen. John McCain and his aides complained voters received calls describing McCain as a “cheat and a liar and a fraud” and intimating the untrue accusation that he had fathered an illegitimate child. Karl Rove, the political strategist for McCain’s rival, George W. Bush, was widely cited by Democrats and journalists as being behind the calls. Rove has adamantly denied the charge. 4

But Republicans are not the only ones who have been accused of using push polls. In the 2014 race for Suffolk County, N.Y., comptroller, Republican candidate John Kennedy accused the campaign of his Democratic opponent, James Gaughran, of conducting a push poll that misrepresented the work Kennedy’s wife, Leslie, did as his legislative aide. He said the Gaughran campaign portrayed it as sheer nepotism, when in fact she put in long hours and was well regarded among constituents. 5

Gaughran defended his campaign’s efforts, saying Leslie Kennedy’s work was a legitimate campaign issue. 6 But Gaughran lost, and Republicans said it was because voters recognized the attacks as misguided. 7

Many states have banned the practice. But in October, New Hampshire’s Supreme Court upheld a lower court ruling that said state limits on push polls do not apply to federal candidates. The case stemmed from a 2010 claim by the state attorney general’s office that Republican Rep. Charlie Bass’ campaign had asked negative questions about Bass’ Democratic rival, Ann McLane Kuster. 8

In the U.S. Senate, outgoing Alaska Democrat Mark Begich in November introduced an unsuccessful bill to expand the national Do Not Call Registry, which allows consumers to block telemarketers’ calls, to ban push polls. 9

— Chuck McCutcheon

1 Sheldon R. Gawiser and G. Evans Witt, “20 Questions a Journalist Should Ask About Poll Results,” National Council on Public Polls, undated, http://tinyurl.com/mj52bbg.
2 Scott Keyes and Adam Peck, “Dirty Tricks: Mysterious Conservative Group Sending Out Push Polls In South Carolina Special Election,” ThinkProgress.org, May 2, 2013, http://tinyurl.com/br48ep6.
3 “Colbert Busch ‘Abortion’ Push Poll Rocks SC-1,” FitsNews.com, May 1, 2013, http://tinyurl.com/nbuqslr.
4 Karl Rove, Courage and Consequence: My Life as a Conservative in the Fight (2010), p. 151, http://tinyurl.com/mycxgtm.
5 Rick Brand, “Suffolk’s 12th LD: Kennedy blasts ‘lying’ in Democratic message,” Newsday, Sept. 1, 2014, http://tinyurl.com/lw8udb3.
6 David M. Schwartz, “Legis. John Kennedy’s hiring of wife an issue in Suffolk comptroller race,” Newsday, Oct. 13, 2014, http://tinyurl.com/mtfo4rt.
7 Rick Brand, “Kennedy’s Big Boost at Home,” Newsday, Nov. 9, 2014.
8 David Brooks, “State Supreme Court says push polls for federal candidates are OK, but not for state offices,” The Telegraph (Nashua, N.H.), Oct. 16, 2014, http://tinyurl.com/o3mr62z.
9 “Begich Introduces Bill to Expand ‘Do Not Call Registry’ and Protect Alaskans’ Privacy,” Alaska Business Monthly, November 2014, http://tinyurl.com/q9f6527.

‘Push Polls’ Cast Negative Light on Candidates
Polling groups condemn the practice as manipulative.


owned cars or had phones — and predicted erroneously that Republican Alf Landon would beat Democratic incumbent Franklin D. Roosevelt. Gallup’s own poll of 50,000 people, which was more representative of the overall population, said Roosevelt would win. Roosevelt’s victory helped establish Gallup as the nation’s leading polling figure. 51

“My dad thought that polls were absolutely vital to a democracy,” said George Gallup Jr., who followed his father into the business. “He felt that polls were extremely important because it removed the power from lobbying groups and from smoke-filled rooms and let the public into the act.” 52

In 1941, the National Opinion Research Center became the first noncommercial polling organization. A private, nonprofit group, it is now based at the University of Chicago and receives funding from other nonprofits as well as government agencies. Six years later, the American Association for Public Opinion Research (AAPOR) was founded as the professional organization of pollsters. 53

Dewey Vs. Truman

The timing of polls was a major issue in the 1948 presidential election between President Harry S. Truman and Republican Gov. Thomas Dewey of New York. The Roper poll, founded by pollster Elmo Roper just after World War II, discontinued surveys in September, while Gallup stopped a few weeks before Election Day. Both predicted a Dewey win, ignoring the possibility that people might change their minds as the election drew closer.

But Truman prevailed, and the photo of him holding up a Chicago Tribune mistakenly contending he had lost has become an iconic image. Gallup and other pollsters publicly apologized and refined their methods, including polling closer to Election Day. 54

Democrat John F. Kennedy relied heavily on polls in winning the presidency in 1960, setting the stage for subsequent White House contenders. Pollsters eventually began acquiring reputations as savvy strategists, providing advice on what issues were popular or unpopular.

In early 1976, with the country reeling from the Watergate scandal and the resignation of President Richard M. Nixon, pollster Patrick Caddell became a trusted confidant of Democratic candidate Jimmy Carter. After Carter’s election, Caddell used his data to advise Carter on issues such as the 1977 treaty to cede U.S. control of the Panama Canal to Panama. 55

Two years later, Caddell implored Carter to give a speech addressing the country’s “crisis of confidence” stemming from soaring inflation, oil prices and other factors. In Carter’s televised address on July 15, 1979, he said: “The solution of our energy crisis can also help us to conquer the crisis of the spirit in our country.” The speech led Republicans to depict Carter in the 1980 campaign as a pessimist in contrast with Republican Ronald Reagan. 56

Exit Polling

Polls caused controversy on Election Day that year. By then, news media “exit polls” of voters leaving polling places had become standard. But NBC News projected Reagan’s victory, based on such polling, nearly three hours before the polls had closed on the West Coast, fueling speculation about whether the premature news of Reagan’s victory discouraged many would-be voters from casting ballots.

News organizations subsequently agreed not to project any voting results in a state until its polls had closed. Nevertheless, media mistakes involving exit polls continued.

President Harry S. Truman gleefully holds up the iconic Chicago Tribune headline mistakenly announcing his 1948 re-election defeat by New York Gov. Thomas Dewey. Major pollsters had discontinued their surveys weeks before Election Day, leading to the paper’s mistake. After that, pollsters continued surveying potential voters until closer to Election Day to account for those who changed their minds at the last minute.

Getty Images/Popperfoto


In 1996, three television networks — CNN, CBS and ABC — cited exit polls to project a third-place finish for Republican Bob Dole in Arizona’s GOP presidential primary, which pundits predicted would deal a serious blow to the Kansas senator’s campaign. Dole actually came in second and later ended up winning the GOP nomination. It later emerged that followers of Dole’s opponent, Republican Pat Buchanan, had tried to influence exit pollsters by actively seeking them out in order to shape the speculation in favor of their candidate. 57

Meanwhile, the increasing entry of African-Americans into politics gave rise to a phenomenon known as the “Bradley effect” — the tendency of voters to tell pollsters that they are undecided or are likely to vote for a black candidate, but to actually vote for a white opponent. The term came from Los Angeles Mayor Tom Bradley’s 1982 gubernatorial bid in California, in which the black Democrat led in pre-election polls only to lose to Republican George Deukmejian, who is white. 58

One of the biggest polling miscues arose in the 2000 presidential election. Voter News Service, the polling statistics group that had worked on the earlier Arizona primary, used exit poll data to project, before Florida’s polls closed, that Democratic Vice President Al Gore would win that state, giving him enough electoral votes to become president. Six hours later, the service declared Bush the winner, then later shifted again and declared the race too close to call. 59 The issue of who won Florida went to the Supreme Court, which ultimately declared Bush the winner of the state, and thus the presidency. Voter News Service was replaced with a new outfit in 2004. 60

As Washington lobbying boomed in the 1990s and 2000s, so did the use of polls by partisan interest groups to support their policy positions. Such groups often commissioned a poll and then used the findings to publicly frame the issues, privately help shape lobbying tactics or both. The infamous “Harry and Louise” television ads in 1994, which were seen as helping turn public sentiment against President Bill Clinton’s health-care reform proposal, were shaped by Republican polls. The Health Insurance Association of America, an industry lobby group, ran the ads depicting a middle-class married couple worrying about rising health care costs and bureaucracy.

“Organized interests that can afford to conduct such surveys have a leg up on lobbying legislators, for the polls give them critical policy and political data that are both precise (as to numbers) and targeted (state or district),” University of Kansas political scientist Burdett Loomis said in a 2003 paper. “Especially when lawmakers are relatively indifferent on an issue, such information can be powerful.” 61

During the 2008 Democratic presidential primaries, a new election-polling mistake surfaced. Pre-primary polls in New Hampshire put then-Illinois Sen. Obama ahead of New York Sen. Hillary Rodham Clinton by an average margin of more than 8 percent. But Clinton prevailed easily.

Andrew Kohut, then president of the Pew Research Center, blamed the discrepancy on the “long-standing pattern of pre-election polls overstating support for black candidates among white voters, particularly white voters who are poor.” 62 A Stanford University researcher later discovered a twist on the Bradley effect, in which white poll respondents were more likely to say they would back Obama if the person interviewing them was thought to be black. 63

Polling mistakes in Florida in the 2000 presidential election added to confusion over the outcome of the contest between Democratic Vice President Al Gore and former Gov. George W. Bush of Texas. The polling organization Voter News Service projected, before Florida’s polls closed, that Gore would win that state, giving him enough electoral votes to become president. Six hours later, the service declared Bush the winner, then later shifted again and declared the race too close to call. The Supreme Court ultimately declared Bush the winner of the state — and the presidency.

Getty Images/Kevin Dietsch

The Supreme Court’s landmark Citizens United v. Federal Election Commission decision in 2010, which abolished many federal restrictions on campaign spending, introduced a new entity — so-called “super PACs,” which can make unlimited donations as long as the money does not go directly to candidates. 64

Super PACs for both parties began commissioning their own polls, which drew media attention. For example, two months before Kentucky’s 2014 Senate GOP primary, a super PAC supporting Sen. Mitch McConnell of Kentucky released a poll showing that he held a nearly 40-point lead over challenger Matt Bevin. 65

In the 2012 presidential race, FiveThirtyEight blogger Silver became a target of Republicans’ hostility. Even when polls showed Romney ahead of Obama, Silver issued forecasts based on polls showing Obama’s statistical chances of winning remained greater. Silver ended up precisely predicting that the president would receive 332 Electoral College votes, 62 more than he needed to win. 66

Heading into Election Day, Romney’s private polls showed him ahead in Florida, Colorado and other key battleground states, but he won just one — North Carolina. 67 In a subsequent post-election study, the Republican National Committee’s Growth and Opportunity Project — a panel of key party officials — said 70 percent of the GOP pollsters it surveyed agreed their party’s official polling was inferior to the Democratic Party’s.

A significant problem was the GOP-financed pollsters’ failure to include adequate numbers of Hispanics and younger voters. More than one-third of GOP pollsters said Republican surveys that failed to properly model voter turnout were a “very important” factor in the inaccuracy. 68

But it wasn’t just the GOP’s internal polls that proved wanting. Gallup’s Election Day survey showed Romney ahead of Obama, 49 percent to 48 percent. Obama won, 51 percent to 47 percent. Newport, the Gallup editor, later cited several problems, including shortcomings in the company’s model of likely voters and the practice of finding survey respondents by calling only publicly listed landlines. To accurately reflect the voting public, Newport said, Gallup should also have called voters with unlisted numbers. 69

Other polls by organizations unaffiliated with a political party have fared better. McGhee, of the Public Policy Institute of California, examined publicly available Senate polls conducted one week before Election Day from 1990 to 2012. He found that both the volume of polls and their accuracy had markedly increased.

Pollsters’ mistakes have been magnified because they have often come in high-profile elections. In the 2013 Virginia governor’s race, regarded as a key barometer for 2014’s elections, most pollsters did not adequately gauge Republican Ken Cuccinelli’s support. Although Cuccinelli had trailed in pre-election polls by double digits, he lost to Democrat Terry McAuliffe by fewer than 3 percentage points. 70

Seven months later, Cantor’s primary loss in Virginia to college professor Dave Brat sparked massive criticism of polls. Although several pre-election polls were wrong, the one drawing the most attention was by Republican pollster John McLaughlin, which had shown Cantor, the House majority leader, up by 34 percentage points. Cantor lost by double digits, and McLaughlin blamed a variety of factors, including higher-than-expected voter turnout and harsh attacks late in the campaign on Cantor’s stance on immigration. 71

But some experts wondered if other factors were at work. GOP pollster and political consultant Frank Luntz called the miscue “a mind-blowing modern-day ‘Dewey Beats Truman’ moment” and said polls do not always provide all of the information needed to gauge an election.

“Without qualitative insight — talking with voters face to face to judge their mood, emotion, intensity and opinion — polls can be inconsequential, and occasionally wrong,” Luntz said. 72

High-profile misjudgments continued in the fall. In Kansas, many pre-election polls showed Gov. Sam Brownback and Sen. Pat Roberts in serious electoral trouble, but both Republicans won easily. Chapman Rackaway, a political scientist at Fort Hays State University, blamed herding. Pollsters “want to be close to each other, so if they’re wrong, they’re no more wrong than everybody else,” he said.

Brownback’s campaign manager, Mark Dugan, said public polls — as opposed to his campaign’s internal surveys — underestimated how many Republicans would vote. 73

In addition to erring in the Virginia Senate race and Georgia’s Senate and governor’s races, many pollsters failed to foresee the defeat of Democratic Sen. Kay Hagan in North Carolina. And even polls that predicted wins for GOP Senate candidates Tom Cotton in Arkansas and Kentucky’s McConnell failed to anticipate how decisive their victories would be. Republicans ended up with 54 seats to regain control of the Senate.

“The performance of the [pollsters’ forecasting] models was strikingly similar,” wrote The New Yorker’s John Cassidy. “On the eve of the election, all of them indicated that it was unlikely that the Republicans would get 54 seats or more. Several of them suggested that the probability of this happening was less than 25 percent.” 74

One factor, pollsters agree, is that polling is far easier to do during presidential-election years than in midterm-election years because in a presidential year, more of the electorate is engaged and shows up to vote. That makes it easier to predict voters’ behavior and obtain their responses ahead of time. “The Midterm Polling Curse struck us all” in 2014, said polling forecaster Sam Wang. 75

Pollsters also said inaccurate polls drowned out more accurate ones in their forecasting models. The most accurate forecasts, however, did not depend solely on aggregating poll data but also made their own adjustments. They included The Washington Post’s Election Lab and FiveThirtyEight.com. 76
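The difference between pure aggregation and aggregation plus adjustments can be sketched with a toy model (all numbers and pollster names below are hypothetical, not drawn from Election Lab or FiveThirtyEight): rather than taking a simple average of polls, a forecaster can first subtract an estimated “house effect” for each pollster and weight each poll by its sample size and recency.

```python
# Toy poll-aggregation sketch (hypothetical data): average several polls of the
# same race, subtracting an estimated "house effect" for each pollster before
# averaging, and weighting larger and more recent polls more heavily.
import math

polls = [
    # (pollster, margin for candidate A in points, sample size, days before election)
    ("Pollster X", +3.0, 800, 2),
    ("Pollster Y", +1.0, 1200, 5),
    ("Pollster Z", -1.0, 600, 9),
]

# Estimated lean of each pollster relative to the field (assumed, in points).
house_effect = {"Pollster X": +1.5, "Pollster Y": 0.0, "Pollster Z": -0.5}

def adjusted_average(polls, house_effect, decay_days=7.0):
    """Weighted mean of house-effect-adjusted margins."""
    num = den = 0.0
    for pollster, margin, n, age in polls:
        adj = margin - house_effect.get(pollster, 0.0)   # remove house lean
        w = math.sqrt(n) * math.exp(-age / decay_days)   # size and recency weight
        num += w * adj
        den += w
    return num / den

print(round(adjusted_average(polls, house_effect), 2))
```

Real forecasting models are far more elaborate, but the sketch shows why two forecasters aggregating the same polls can still disagree: the adjustments, not the raw data, carry the judgment.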



But pollsters pushed back against criticism that the election was a disaster. FiveThirtyEight.com asked 17 prominent pollsters, “In light of Tuesday’s results, did election polls do well this year at depicting the electorate’s views?” Ten answered yes; six said no; one said, “Some did and some didn’t.” 77

In a later statistical analysis, McGhee, of California’s Public Policy Institute, said the 2014 polls weren’t quite as on-target as in prior years. But he said: “We are still clearly in a world of better accuracy.” 78

CURRENT SITUATION

Presidential Race

Pollsters are expected to play a prominent role in the 2016 presidential election. Clinton, the consensus Democratic frontrunner, in January hired Joel Benenson, whose polling for Obama was considered a key factor behind his two White House wins.

In her failed 2008 race for the presidency, Clinton had relied on Democratic pollster Mark Penn, whom she ultimately replaced when she fell behind and tried to shake up her campaign. Obama used Benenson as part of a team of pollsters, an approach Clinton is said to be interested in duplicating if she decides to run. 79

Pollsters are at odds over whether a controversial decision by New Jersey Republican Gov. Chris Christie’s administration last year to close a portion of the George Washington Bridge for political reasons is hurting his possible presidential bid. On the Republican side, Adam Geller, a pollster for Christie, made news in December when he asserted that the controversy — dubbed “Bridgegate” — was not hurting the New Jersey governor, who has yet to formally announce his White House plans.

“It seems to me that in New Jersey, people are over the Bridgegate scandal, and nationally, it is not even on the public’s radar,” Geller said in a speech. “I’m not speaking to whatever the future may hold.” 80 However, a Quinnipiac University poll in January showed Christie with a rating of 46 percent approval/48 percent disapproval among New Jersey voters. “It’s his worst score in almost four years, so Bridgegate continues to harass Gov. Christie,” said Maurice Carroll, the Quinnipiac poll’s assistant director. 81

Another potential GOP candidate, Wisconsin Gov. Scott Walker, in November downplayed a poll of Wisconsin voters showing that only 42 percent of respondents said he would make a good president. That figure was 4 percentage points below that of Wisconsin Rep. Paul Ryan, the 2012 GOP vice-presidential nominee. “In the end, any poll right now is ridiculous,” Walker said. “You look over the past four or five elections, people who poll high at the beginning are not the people who end up being the nominees.” 82

However, the fact-checking website PolitiFact.com looked at past early presidential election polls and found that five contenders who did well at the time did become the nominee. It rated Walker’s assertion “Mostly False.” 83

Internet Polling

How much attention should be accorded to polls using nonprobability sampling — such as opt-in surveys conducted on the Internet — is one of the biggest ongoing debates in the polling world.

“Our future may involve methodologies where your gut says, ‘This’ll work,’ but the mathematical foundation of probability sampling isn’t there,” Western New England’s Vercellotti says. “You’re seeing really spirited debates about that.”

One such debate occurred when CBS News and The New York Times announced last year they would begin using YouGov panels of voters who express their opinions online as part of their election poll models. YouGov, a United Kingdom-based survey research firm, uses the online panels as an alternative to traditional telephone-based surveys. (See sidebar, p. 132.)

David Leonhardt, editor of The Times’ “Upshot” site, said the new partnership would result in “a significantly expanded panel. . . . You get a more accurate picture of the horse race when you look at it from several angles.” 84

However, pollsters who prefer phone polling were sharply critical. Link, from the American Association for Public Opinion Research (AAPOR), took the unusual step of issuing a public statement saying YouGov’s survey methods have “little grounding in theory” and questioning whether the news organizations would adequately disclose their methodologies. 85

In response, Andrew Gelman — a political scientist and director of Columbia University’s Applied Statistics Center who says Internet polls have merit — wrote a sarcastic post on a blog with the headline: “President of American Association of Buggy-Whip Manufacturers takes a strong stand against internal combustion engine, argues that the so-called ‘automobile’ has ‘little grounding in theory’ and that ‘results can vary widely based on the particular fuel that is used.’ ” 86

Link told CQ Researcher in January he did not intend to criticize YouGov and that his statement “was not as effective or efficient as it should have been. With the issues it raised, the silver lining was that it should generate a dialogue.”

An AAPOR task force on nonprobability sampling issued a report in 2013 concluding that no single consensus method exists for conducting it. 87 But Link says his organization’s annual conference, scheduled for May, will include




At Issue: Can “nonprobability” polls reliably reflect public opinion?

YES: DOUGLAS RIVERS, CHIEF SCIENTIST, YOUGOV; PROFESSOR OF POLITICAL SCIENCE, STANFORD UNIVERSITY

WRITTEN FOR CQ RESEARCHER, FEBRUARY 2015

Every public opinion poll conducted during the 2014 midterm elections used a nonprobability sample of voters. Some pollsters claim to have probability samples because they used random digit dialing (RDD), but they’re confusing a probability sample of phone numbers with a probability sample of voters. These are not the same thing.

In probability sampling, every voter has a known positive probability of selection. At one time, RDD did provide something close to a probability sample. Before the era of cellphones, answering machines and telemarketing, most households had a single landline telephone. When it rang, they answered it and took surveys. Nowadays, people have multiple phone numbers, screen their calls and — if reachable — aren’t willing to take surveys. Response rates are often under 10 percent, so the probability that a voter ends up in a poll isn’t the probability that their phone number was selected.

If you don’t know the probability of selection, then it’s not a probability sample, and you should stop pretending that you have a probability sample. We already weight our data to match census demographics because some hard-to-get categories of voters are underrepresented in phone samples by a factor of 10 or more.

This is where so-called nonprobability methods and Internet panels have advantages. Large panels of respondents have been recruited to take surveys. Instead of randomly selecting panelists for a poll, they are chosen to be representative of the population. If the panel is large and diverse enough, it’s feasible to obtain a sample with smaller skews than an RDD phone sample. This means that less weighting is needed to correct for skews in the sample.

The theoretical basis for making inferences from a nonprobability panel and for weighting an RDD phone sample are identical, since both rely upon adjusting the sample for respondent characteristics. How well they work is an empirical matter.

At YouGov, we’ve used an online opt-in panel to produce estimates for nearly every Senate and governor election as well as presidential vote by state for almost a decade. The average error for Senate races has ranged from 1.8 percent in 2010 to 3.5 percent in 2008 (with 2014 being 3.2 percent). State-level presidential estimates had an average error of 1.3 percent in 2012 and 2.5 percent in 2008. These error rates were about the same as or slightly less than phone samples in the same states over the same period.

NO: GARY LANGER, FOUNDER AND PRESIDENT, LANGER RESEARCH ASSOCIATES; FORMER ABC NEWS POLLING DIRECTOR

WRITTEN FOR CQ RESEARCHER, FEBRUARY 2015

We cannot be confident that nonprobability samples validly and reliably represent broader public attitudes. They fall short in the two key tests of scientific inquiry: theoreticism and empiricism. In both, by contrast, probability-based methods excel.

Probability-based survey research — conducted, for example, via random-sample telephone or face-to-face interviews — is based on probability theory, which provides grounds for inference to a larger population. The theory allows for random nonresponse, a reason that probability-based samples, even with low response rates, are consistently accurate when measured against relevant benchmarks.

Nonprobability sampling has made a comeback of late with the advent of the Internet and social media, offering fast, large-scale, inexpensive data collection — but it lacks a theoretical basis. Most prominent are opt-in Internet panels, in which individuals sign up to click through questionnaires online in exchange for cash and gifts. There’s evidence of a cottage industry of professional respondents, signing up for multiple panels, perhaps under assumed identities, to increase their prize-winning opportunities.

Above and beyond validation of identities, quality control often is lacking. Full disclosure is rare, including details of sampling, sample weighting and respondent selection procedures. And such surveys often make unsupportable claims, including the assertion of a calculable margin of sampling error.

Empirically, academic studies have suggested that such surveys produce inconsistent results in terms of accuracy, time trends and relationships among variables. In 2010, the American Association for Public Opinion Research concluded, “There currently is no generally accepted theoretical basis from which to claim that survey results using samples from nonprobability online panels are projectable to the general population.” It advised researchers to “avoid nonprobability online panels when one of the research objectives is to accurately estimate population values.”

Attempted quantitative analysis of social media also raises concerns. These vast data sets may include, for example, multiple postings (and repostings) created via orchestrated campaigns using computerized “bots.” Valid demographic information — even nationality — generally is unavailable. Selecting relevant posts and accurately discerning their meaning are highly challenging given the use of slang, abbreviations, acronyms, irony and sarcasm. And, again, a theory to support inference is absent.

All survey methods — probability and nonprobability alike — should be subjected to ongoing, independent evaluation. For this to be accomplished, full disclosure of relevant information on sampling, weighting and data production is essential.


“an entire focus” on nonprobability sampling, “so we can all come together and learn from each other and learn from the new techniques as well as learn from the new methodologies.”

Similar efforts already are underway. In November, the online survey company SurveyMonkey announced it would begin collaborating with the research organization Westat and the Pew Research Center. The groups will study the science behind adjustments and weighting used with nonprobability Internet sampling, along with several other sampling methods. “Having no singular framework for nonprobability sampling is limiting the insights market researchers and opinion pollsters can deliver,” said Mike Brick, a Westat senior statistician. 88

Some pollsters say finding and retaining online panels of voters who truly reflect the public at large can be extremely expensive and time-consuming, but they agree the approach merits further investigation.

Since 2003, the RAND Corp., a nonprofit research organization, has used such a panel that seeks to replicate the voting population at large. The New York Times in October 2014 began running articles based on the panel’s work. 89

The RAND panel’s final pre-election result in 2012 was closer to the election’s actual outcome than that of many other polls. 90

Robot Interviewers

Many in the polling industry are watching software giant Microsoft Corp., which drew attention in September when it started an online survey website called Microsoft Prediction Lab, in which users can submit predictions on politics and other subjects.

The company said Cortana, the digitally based smartphone assistant similar to Apple’s Siri, eventually could conduct the interviews as an alternative to human questioners used in phone polling.

Scott Keeter, the Pew Research Center’s director of survey research, applauded the company’s work. He said replacing a human pollster with a digital one was not entirely unprecedented, but cautioned that it needs ample study.

“There are some robopolls out there with the recorded voice of a real person asking the question — there have been experiments with avatars administering polls to you,” Keeter said. “But there are issues here, too. What’s the accent? Does she remind you of any old girlfriend or an ex-wife, and does that have an effect on your answers? For sensitive questions like drug use, are people more or less likely to tell Cortana the truth? This is why we need experiments like this.” 91

The public’s disdain for robopolls can affect their quality. Two researchers compared robopolls of a president’s job-approval rating with live calling and Internet surveys. Although they found phone calls and online surveys produced “quite similar” results, they found that robopolls had far lower participation rates, resulting in “a significantly higher estimate of the disapproval rate of the president and a significantly lower estimate for ‘no opinion.’ ” 92

The Marketing Research Association, the trade group representing the opinion and marketing research industries, asked the Federal Communications Commission (FCC) in January to consider relaxing federal regulations to allow survey, opinion and marketing research calls to cell phones.

Existing regulations that forbid the use of automatic dialers or “robocallers” to cell phones limit cell phone polling, pollsters say. Currently, only human beings can call cell phones. The association contended in comments to the FCC that the rules “unnecessarily and perhaps unintentionally limit researchers’ ability to gather unbiased, reliable and accurate public views that cost-effectively represent all demographics.” 93

Consumer groups, however, are expected to continue their ardent opposition to allowing robocalls on cell phones. They say such calls are an intrusion and could open the door to unwanted sales and marketing pitches. 94

Many states regulate robocalls, primarily by restricting when they can occur. Several specifically ban polling-related calls at certain times of the day or night. 95

[Photo caption: Former Secretary of State Hillary Rodham Clinton, the consensus frontrunner for the Democratic nomination if she decides to run, in January hired Joel Benenson, whose polling was considered a key factor in Barack Obama’s two White House wins. Getty Images/Lisa Lake]

Bill McInturff, partner and cofounder of Public Opinion Strategies, a survey research firm in Alexandria, Va., is among those exploring the potential of cell phone use. His firm conducted the first cell-only national political survey in 2013, and he says cell polls can help illuminate the views of younger voters, who are avid cell phone users.

Though McInturff says a cell-only survey isn’t representative of the population at large, he adds: “Once you understand you’re doing it for targeted audience research, the kind of depth you can get through a mobile-designed survey is really powerful.”

OUTLOOK

Shades of Gray

No consensus appears to exist about the future of political polling. Pollsters in the field tend to be less willing to try new approaches than corporations and other entities that rely on surveys.

“Political pollsters are not usually the industry leaders,” said Christina Matthews, a Republican pollster in Alexandria, Va. 96

Some scholars wonder if the rise of “big data” — the technological ability to swiftly vacuum up numerous details about people’s habits — will someday overtake traditional polling. Such data can be gleaned and analyzed faster than through telephone polls, with new features — such as tagging the data with Global Positioning System coordinates — greatly broadening its reach. 97

“I foresee a world in which we’ll blend surveys covering full populations . . . with continuous-time, sensor and other data on subsets of the population,” said Georgetown University Provost Robert Groves, a social statistician. 98

For now, though, many pollsters say telephone polling will remain dominant in the political world. “In the short run, there isn’t anything as good as a live call to a cell phone or landline,” Democratic pollster Greenberg says. “I don’t see that going away.”

Terry Casey, a political science professor at Rose-Hulman Institute of Technology in Terre Haute, Ind., agrees with Greenberg. “I would be surprised if what we do in terms of polling is radically different in 10 years,” Casey says. “More often than not, [pollsters] do get the results right.”

The Times’ Leonhardt said he foresees a greater reliance on text messaging to cell phones. “If text messaging becomes nearly universal and occurs mostly through people’s mobile phones, surveyors could use random-digit texting as a way to poll people, for instance,” he said. 99

Hankin, the former Obama pollster, says the industry needs to broaden its ability to capture public ideologies. His firm, Lincoln Park Strategies, uses more layers in its questions and sampling than many traditional firms, enabling it to score the electorate into 16 different demographic clusters that can be used to tailor messages to voters. 100

“There’s way too much one-dimensional viewing of how opinion works,” he says. “You’re either A or B, pro-choice or pro-life. But most people live their lives in shades of gray. What [pollsters] really need to be asking is, ‘What’s your ideology, and where do you place Obama or Romney on that spectrum?’ When you start overlaying these things, really 5 percent of the public are people who would, by definition, be an actual moderate or an actual independent.”

FiveThirtyEight’s Silver, meanwhile, says he hopes his approach of aggregating a wide number of polls — instead of placing too much importance on any single poll — continues to gain currency.

“I hope that — probably not by next election cycle, but by 2024 or 2028 — it’s just kind of incorporated into the DNA of political coverage and encourages people to think more about uncertainty,” he said. 101

Whatever polling changes are proposed, Western New England’s Vercellotti predicts that they will provoke intense discussion. “The industry is filled with really smart, determined people — feisty, argumentative people,” he says. “What makes it interesting is to see how we all adapt.”

Notes1 Chuck Ross, “Larry Sabato: Polling IndustryNeeds Some ‘Housecleaning,’ ” The Daily Caller,Nov. 5, 2014, http://tinyurl.com/n2gnal7.2 Emily Cadei, “Political strategist Patrick Ruffiniforecasts the GOP’s future,” USA Today, Oct. 23,2014, http://tinyurl.com/mexkzo4.3 Jake Gibson, “Palin: Polls are for Strippersand Cross Country Skiers,” fox News.com,Sept. 3, 2011, http://tinyurl.com/pzow7j9.4 michael Scherer, “How Obama’s Data Crunch-ers Helped Him win,” CNN.com, Nov. 8, 2012,http://tinyurl.com/lq8y4hp.5 Cliff Zukin, “Sources of variation in Pre-Election Polls: A Primer,” American Associationfor Public Opinion Research website, Oct. 24,2012, http://tinyurl.com/mxxtdrb.6 Carl Bialik and Harry Enten, “why PollstersThink They Underestimated ‘No’ in Scotland,”FiveThirtyEight.com, Sept. 19, 2014, http://tinyurl.com/mmqlftj.7 Robert Channick, “40% of homes now withouta landline,” Chicago Tribune, July 8, 2014,http://tinyurl.com/mx5fhrg.8 “Assessing the Representativeness of PublicOpinion Surveys,” Pew Research Center for thePeople & the Press, may 15, 2012, http://tinyurl.com/7e8gjrr.9 “2015 Congressional Issues Survey from Con-gressman morgan Griffith,” Office of Rep. mor-gan Griffith, http://tinyurl.com/lfke2jb.10 “Congressman Pete Stark’s Online Survey:Should Congress Impeach Bush?” warIsACrime.org, http://warisacrime.org/node/30892.11 Pew Research Center, op. cit.12 Nate Silver, “Is the Polling Industry in StasisOr In Crisis?” FiveThirtyEight.com, Aug. 25,2014, http://tinyurl.com/lrssa2m.13 Eric mcGhee, why The Polls Are Better ThanEver at Predicting Senate Races,” The WashingtonPost, Oct. 28, 2014, http://tinyurl.com/o646py4.

140 CQ Researcher

POLITICAL POLLING

14 Scott Clement, “Survey Says: Polls Are Bi-ased,” The Washington Post, Sept. 4, 2013,http://tinyurl.com/mza82wa.15 Jeff Zeleny, “Poll: Obama Holds NarrowEdge Over Romney,” The New York Times,Sept. 14, 2012, http://tinyurl.com/k7wz8qn.16 Ruby Cramer, “Conservatives Embrace AlternatePolling Reality,” Buzzfeed.com, Sept. 24, 2012,http://tinyurl.com/p5amuse.17 Jason Stanford, “50 Shades of Crazy: SkewedPolls,” The Huffington Post, Sept. 27, 2012, http://tinyurl.com/ntv9hst.18 Niall Stanage, “Dems: Don’t Trust the Polls,”The Hill, Oct. 13, 2014, http://tinyurl.com/nuqwyoz.19 Nate Silver, “Poll Averages Have No Historyof Consistent Partisan Bias,” The New York Times,Sept. 29, 2012, http://tinyurl.com/pxfjebz.20 Nate Silver, “The Polls might Be SkewedAgainst Democrats — Or Republicans,”FiveThirtyEight.com, Oct. 15, 2014, http://tinyurl.com/l7ke8yj.21 “marist Institute for Public Polling DirectorLee miringoff,” HughHewitt.com, Sept. 14, 2012,http://tinyurl.com/mb4y7n2.22 Harry Enten, “Are Bad Pollsters CopyingGood Pollsters?” FiveThirtyEight.com, Aug. 11,2014, http://tinyurl.com/k4fp8bn.23 fabio Rojas, “How Twitter can predict anelection,” The Washington Post, Aug. 11, 2013,http://tinyurl.com/m5wa8vs.24 matthew macwilliams, Edward Erikson andNicole Berns, “Can facebook Predict who winsthe Senate in 2014?” Politico, Jan. 20, 2014,http://tinyurl.com/ls8kpp7.25 Jennifer Schlesinger, “Popularity Contest:Can facebook and Twitter Predict ElectionResults?” ABC News.go.com, Nov. 25, 2010,http://tinyurl.com/2eqp3bz.26 Stuart Rothenberg, “Twitter Can’t Yet PredictElections,” Roll Call, Aug. 14, 2013, http://tinyurl.com/m2387j9.27 Steve Koczela, “Twitter Doesn’t Have PollstersRunning Scared,” Commonwealth, Aug. 20,2013, http://tinyurl.com/m8jsahu.

28 Schlesinger, op. cit.29 Hadas Gold, “#Hillary #TedCruz Rule,” Politi-co, Dec. 4, 2014, http://tinyurl.com/lc6728e.30 “white House 2016: Republican Nomination,”PollingReport.com, http://tinyurl.com/m6e6dsy.31 Amy mitchell and Paul Hitlin, “Twitter Re-action to Events Often at Odds with OverallPublic Opinion,” Pew Research Center, march 4,2013, http://tinyurl.com/cvqq5hz.32 Tom Rosenstiel, “Political Polling and theNew media Culture: A Case of more BeingLess,” Public Opinion Quarterly, vol. 69, No. 5,2005, http://tinyurl.com/oppsesm.33 Adam Nagourney, “Reid Assails Polls ThatPredicted His Loss,” The New York Times,Nov. 3, 2010, http://tinyurl.com/24v8utd.34 michael Gerson, “The Trouble with Obama’sSilver Lining,” The Washington Post, Nov. 5,2012, http://tinyurl.com/c4dz38j.35 Gautam Hathi, “fiveThirtyEight founder NateSilver tackles election predictions, website goals,”The Chronicle (Duke University), Nov. 24, 2014,http://tinyurl.com/ozjrf2t.36 “Tracking Polls,” On the media, march 26,2010, http://tinyurl.com/no28du2.37 Sheldon R. Gawiser and G. Evans witt, “20Questions a Journalist Should Ask About Polls”(Third Edition), undated, National Council onPublic Polls website, http://tinyurl.com/24a246.38 Norm Goldstein (ed.), The Associated PressStylebook (2013), p. 214.39 mark Blumenthal, “what makes a Poll Par-tisan?” National Journal, march 22, 2010, http://tinyurl.com/o2dzvnn.40 Zukin, op. cit.41 Alex Kaplan, “media Coverage of the Pres-idential Horse Race and of Presidential Polls,”American University research paper, Spring 2013,http://tinyurl.com/kdpr449.42 Josh Katz, wilson Andrews and Jeremy Bow-ers, “Elections 2014: make Your Own Senateforecast,” The New York Times, http://tinyurl.com/mkslt86.43 “Creating a Buzz with web-Powered Public

Relations,” visible Strategies Communications,http://tinyurl.com/m32ve8m.44 Greg marx, “In Defense of [the Right Kindof] Horse Race Journalism,” Columbia JournalismReview, Sept. 6, 2011, http://tinyurl.com/nwwgc7c.45 “History of Opinion Polling,” NOw withBill moyers, June 14, 2002, http://tinyurl.com/op8s7aw.46 “23d: The 1824 Election and the ‘Corrupt Bar-gain,’ ” USHistory.org, http://tinyurl.com/6q4cmlb.47 “History of Opinion Polling,” op. cit.48 “How Did the Straw Poll Originate?” InnovateUs.net, http://tinyurl.com/mfblrbr.49 “George Gallup and the Scientific OpinionPoll,” The first measured Century, PBS.org,www.pbs.org/fmc/segments/progseg7.html.50 Sarah van Allen, “George Gallup, Twentieth-Century Pioneer,” Gallup.com, Dec. 29, 1999,http://tinyurl.com/pz8utqu.51 “five biggest political polling mistakes inU.S. history,” National Constitution Center,Oct. 2, 2012, http://tinyurl.com/m4d8xsr.52 “George Gallup and the Scientific OpinionPoll,” op. cit.53 “History of Opinion Polling,” op. cit.54 “five biggest political polling mistakes inU.S. history,” op. cit.55 Alexander Heard and michael Nelson (ed-itors), Presidential Selection (1987), p. 212,http://tinyurl.com/nwrd7gg.56 “Carter’s ‘Crisis of Confidence’ Speech,”PBS.org, http://tinyurl.com/3b5qpmj. Also seeD. Teter, “Iran between East and west,” Jan. 26,1979, Editorial Research Reports, available at CQResearcher Plus Archive.57 Bill Carter, “Networks Admit Error in ArizonaPrimary Calls,” The New York Times, feb. 29,1996, http://tinyurl.com/qhmgx3z.58 Brian mcCabe, “fiveThirtyEight Glossary,”The New York Times, http://tinyurl.com/pazjfon.59 Kate Pickert, “A Brief History of Exit Polling,”Time, Nov. 4, 2008, http://tinyurl.com/jwubfm5.60 “five biggest polling mistakes in U.S. history,”op. cit.61 Burdett Loomis, “from Hootie to Harry (andLouise): Polling and Interest Groups,” BrookingsInstitution, Summer 2003, http://tinyurl.com/mcl6qey.62 Andrew Kohut, “Getting It wrong,” The NewYork Times, Jan. 
10, 2008, http://tinyurl.com/lo49xxc.63 Steven Shepard, “Researcher Examines vari-ation of the ‘Bradley Effect,’ ” National Journal,may 18, 2012, http://tinyurl.com/lfajma5.64 for background, see the following CQ Re-searchers:  Kenneth Jost, “Campaign financeDebates,” may 28, 2010, pp. 457-480; Thomas

About the Author

Chuck McCutcheon is a freelance writer in Washington,D.C. He has been a reporter and editor for CongressionalQuarterly and Newhouse News Service and is co-author ofthe 2012 and 2014 editions of The Almanac of AmericanPolitics and Dog Whistles, Walk-Backs and WashingtonHandshakes: Decoding the Jargon, Slang and Bluster ofAmerican Political Speech. He also has written books onclimate change and nuclear waste.

Feb. 6, 2015 141www.cqresearcher.com

J. Billitteri, “Campaign Finance Reform,” June 13, 2008, pp. 505-528; Christina L. Lyons, “Nonprofit Groups and Partisan Politics,” Nov. 14, 2014, pp. 961-984.
65 Phillip M. Bailey, “Mitch McConnell Holds Wide Lead Over Matt Bevin, Super PAC Senate Poll Finds,” WFPL.org, March 10, 2014, http://tinyurl.com/me7m3q8.
66 Andrew Hough, “Nate Silver: Politics ‘Geek’ Hailed for Barack Obama’s U.S. Election Forecast,” The Telegraph (UK), Nov. 7, 2012, http://tinyurl.com/o4vp693.
67 Mark Halperin and John Heilemann, Double Down: Game Change 2012 (2013), pp. 456, 455.
68 “Growth and Opportunity Project,” Republican National Committee, p. 38, http://gopproject.gop.com.
69 Scott Clement, “Gallup Explains What Went Wrong in 2012,” The Washington Post, June 4, 2013, http://tinyurl.com/o4a64pt.
70 James Hohmann, “Why Terry McAuliffe Barely Won,” Politico, Nov. 6, 2013, http://tinyurl.com/kby7mat.
71 Shane Goldmacher, “Eric Cantor’s Pollster Tries to Explain Why His Survey Showed Cantor Up By 34 Points,” National Journal, June 11, 2014, http://tinyurl.com/m2tub5f.
72 Frank Luntz, “Why Polling Fails,” The New York Times, June 11, 2014, http://tinyurl.com/k4j95jq.
73 Bryan Lowry, “Why Kansas Election Polls Were Dead Wrong,” The Wichita Eagle, Nov. 5, 2014, http://tinyurl.com/l5pz8fa.
74 John Cassidy, “Why Didn’t Anyone See the GOP Tidal Wave Coming?” The New Yorker, Nov. 7, 2014, http://tinyurl.com/qzgx5v6.
75 Sam Wang, “The Polls Failed to Predict a Republican Landslide. Here’s Why,” The New Republic, Nov. 5, 2014, http://tinyurl.com/q36bwmm.
76 John Sides, “Election Lab on track to forecast 35 of 36 Senate races correctly,” The Washington Post, Nov. 5, 2014, http://tinyurl.com/mwdzymq.
77 Carl Bialik, “Most Pollsters We Surveyed Say Polling Did Well This Year,” FiveThirtyEight.com, Nov. 7, 2014, http://tinyurl.com/qx6ovlq.
78 Eric McGhee, “The 2014 Polls Were Still Really Good at Predicting Winners,” The Washington Post, Dec. 12, 2014, http://tinyurl.com/nq32g4s.
79 Maggie Haberman, “Hillary Clinton brings in Robby Mook, Joel Benenson for likely team,” Politico, Jan. 7, 2015, http://tinyurl.com/q9688fu.
80 Elizabeth Swain, “Christie pollster Geller says ‘people are over the Bridgegate scandal,’ ” PolitickerNJ.com, Dec. 3, 2014, http://tinyurl.com/qanj4d7.
81 Kevin McArdle, “Bridgegate is a ‘traffic nightmare that never ends’ for Christie,” NJ1015.com, Jan. 21, 2015, http://tinyurl.com/p3rdmnl.

82 Tom Kertscher, “Those who poll well early don’t become presidential nominee, potential contender Scott Walker says,” PolitiFact Wisconsin, Nov. 24, 2014, http://tinyurl.com/mgrpnto.
83 Ibid.
84 Chris Cillizza, “The New York Times rocked the polling world over the weekend. Here’s why,” The Washington Post, July 31, 2014, http://tinyurl.com/q8ld2dj.
85 Peyton M. Craighill and Scott Clement, “How the polling establishment smacked down The New York Times,” The Washington Post, Aug. 4, 2014, http://tinyurl.com/msbnhcl.
86 Andrew Gelman, “President of American Association of Buggy-Whip Manufacturers takes a strong stand against internal combustion engine, argues that the so-called ‘automobile’ has ‘little grounding in theory’ and that ‘results can vary widely based on the particular fuel that is used,’ ” Statistical Modeling, Causal Inference and Social Science blog, Aug. 6, 2014, http://tinyurl.com/qgk9l73.
87 Drew DeSilver, “Q/A: What the New York Times’ polling decision means,” Pew Research Center, July 28, 2014, http://tinyurl.com/lfysh5f.
88 “SurveyMonkey Announces Collaboration with Leading Research Organizations to Explore Online Non-Probability Sampling,” news release, Nov. 24, 2014, http://tinyurl.com/okjvdgb.
89 “About the RAND American Life Panel,” The New York Times, Oct. 9, 2014, http://tinyurl.com/q3pf28n.

90 Nate Silver, “Which Polls Fared Best (And Worst) in the 2012 Presidential Race,” The New York Times, Nov. 10, 2012, http://tinyurl.com/aorrju4.
91 Alan Schwarz, “Microsoft Begins a Push Into the Polling World,” The New York Times, Sept. 29, 2014, http://tinyurl.com/kngo5b3.
92 Jan van Lohuizen and Robert Wayne Samohyl, “Method Effects and Robopolls,” Survey Practice, Vol. 4, No. 1, 2011, http://tinyurl.com/obmegkb.
93 “Excluding MR from the TCPA: MRA Asks the FCC to Allow Marketing Research Calls to Cell Phones,” Marketing Research Association, Jan. 2, 2015, http://tinyurl.com/pfbfk6a.
94 Mitch Lipka, “If You Hate Robocalls, You’ll Really Hate This Idea,” DailyFinance.com, Jan. 16, 2015, http://tinyurl.com/kkxx7f8.
95 “Political Robocall Laws,” The Stratics Group website, http://tinyurl.com/nbjcjhy.
96 Steven Shepard, “Sorry, Wrong Number,” National Journal, July 12, 2012, p. 25.
97 For background, see Tom Price, “Big Data and Privacy,” CQ Researcher, Oct. 25, 2013, pp. 909-932.
98 Robert Groves, “A Future World of Blended Data,” The Provost’s Blog, Georgetown University, July 30, 2014, http://tinyurl.com/q9q5z6c.
99 Cillizza, op. cit.
100 Tess VandenDolder, “Democratic Pollster Stefan Hankin Explains the Math Behind a Winning Campaign,” DCInno.Streetwise.com, June 24, 2014, http://tinyurl.com/l5euny8.
101 Hathi, op. cit.

FOR MORE INFORMATION

American Association for Public Opinion Research, 111 Deer Lake Rd., Suite 100, Deerfield, IL 60015; 847-205-2651; www.aapor.org. The main association for public opinion and survey-research professionals.

Annenberg Public Policy Center, 202 S. 36th St., Philadelphia, PA 19104-3806; 215-898-9400; www.annenbergpublicpolicycenter.org. University of Pennsylvania-based group that conducts a wide-ranging national election survey.

FiveThirtyEight.com, 147 Columbus Ave., 4th floor, New York, NY 10023; www.fivethirtyeight.com. Website founded by statistician Nate Silver that dissects polls and examines political polling.

Gallup Organization, 901 F St., N.W., Washington, DC 20004; 202-715-3030; www.gallup.com. Major polling organization that focuses heavily on politics.

Pew Research Center, 1615 L St., N.W., Suite 700, Washington, DC 20036; 202-419-4300; www.pewresearch.org. Nonpartisan “fact tank” providing information on issues, attitudes and trends shaping the United States.

Princeton Survey Research Associates International, 600 Alexander Rd., Suite 2-2, Princeton, NJ 08540; 609-924-9204; www.psrai.com. Independent company that does polling for a variety of clients, including the news media.

Roper Center for Public Opinion Research, 369 Fairfield Way, Unit 1164, Storrs, CT 06269-1164; 860-486-4440; www.ropercenter.uconn.edu. University of Connecticut-based research organization that studies political trends.



Selected Sources

Bibliography

Books

Craig, Richard, Polls, Expectations and Elections: TV News Making in U.S. Presidential Campaigns, Lexington Books, 2014.
A San Jose State University journalism professor discusses how polls often drown out other aspects of presidential campaign coverage.

Erikson, Robert S., and Kent L. Tedin, American Public Opinion (9th edition), Pearson, 2014.
The textbook examines the role of public opinion in the electoral process.

Goidel, Kirby, ed., Political Polling in the Digital Age: The Challenge of Measuring and Understanding Public Opinion, Louisiana State University Press, 2011.
An assortment of pollsters and others discuss how polling is changing, especially with improvements in technology.

Silver, Nate, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t, Penguin Press, 2012.
The statistician and founder of FiveThirtyEight.com examines the science of forecasting in politics and other fields.

Articles

Bialik, Carl, and Harry Enten, “Why Pollsters Think They Underestimated ‘No’ In Scotland,” FiveThirtyEight.com, Sept. 19, 2014, http://tinyurl.com/mmqlftj.
Two journalists explore how polls failed to correctly predict the results of a referendum on Scottish independence.

Cassidy, John, “Why Didn’t Anyone See the GOP Tidal Wave Coming?” The New Yorker, Nov. 7, 2014, http://tinyurl.com/qzgx5v6.
A journalist examines how polls overlooked significant Republican wins in November’s elections.

Clement, Scott, “Gallup Explains What Went Wrong in 2012,” The Washington Post, June 4, 2013, http://tinyurl.com/o4a64pt.
The polling organization assesses how its polls missed forecasting President Obama’s 2012 re-election victory.

Connelly, Marjorie, “Push Polls, Defined,” The New York Times, June 18, 2014, http://tinyurl.com/kwkeegz.
A reporter explains push polls and how they differ from legitimate surveys about politicians’ negative advertising.

Enten, Harry, “Are Bad Pollsters Copying Good Pollsters?” FiveThirtyEight.com, Aug. 11, 2014, http://tinyurl.com/k4fp8bn.
A journalist looks at whether nontraditional polls, such as those online, adjust their results to track the findings of higher-quality polls.

Gelman, Andrew, and David Rothschild, “Modern Polling Needs Innovation, Not Traditionalism,” The Washington Post, Aug. 4, 2014, http://tinyurl.com/n2aqp3d.
Two polling experts discuss the controversy over The New York Times’ move to online polling.

Sabato, Larry J., “Skewed: Why Americans Hate the Polls,” Politico, Oct. 6, 2014, http://tinyurl.com/nojbwfa.
The director of the University of Virginia’s Center for Politics discusses problems in political polling.

Shepard, Steven, “Why The GOP Still Doesn’t Have Its Act Together on Polling,” National Journal, Sept. 12, 2013, http://tinyurl.com/kwgggmc.
A journalist chronicles the Republican Party’s post-2012 efforts to improve its polling.

Reports and Studies

“Assessing the Representativeness of Public Opinion Surveys,” Pew Research Center for the People & the Press, May 15, 2012, http://tinyurl.com/7e8gjrr.
A research group concludes that telephone polls that include landline and cell users can be accurate.

Barber, Michael J., et al., “Online Polls and Registration-Based Sampling: A New Method for Pre-Election Polling,” Political Analysis, Summer 2014, http://tinyurl.com/omj4ch4.
Four political scientists outline a new survey method that produces a representative sample of the electorate.

DiGrazia, Joseph, et al., “More Tweets, More Votes: Social Media as a Quantitative Indicator of Political Behavior,” Social Science Research Network, Feb. 21, 2013, http://tinyurl.com/kk795et.
Indiana University researchers argue that Twitter can be an accurate vehicle for predicting U.S. House elections.

Rosenstiel, Tom, “Political Polling and the New Media Culture: A Case of More Being Less,” Public Opinion Quarterly, Vol. 69, No. 5, 2005, http://tinyurl.com/oppsesm.
The founder and director of the Project for Excellence in Journalism studies how the news media’s focus on polls has led campaigns to use polls as a political tool.

Zukin, Cliff, “Sources of Variation in Pre-Election Polls: A Primer,” American Association for Public Opinion Research, Oct. 24, 2012, http://tinyurl.com/ph8gt9u.
A Rutgers University professor and former president of the American Association for Public Opinion Research explains how pollsters conduct their surveys.


Bias

McGhee, Eric, “What if election polls are biased against winners?” The Washington Post, Dec. 8, 2014, http://tinyurl.com/mbbcfkr.
An analysis of Senate election polls from 1990 to 2014 suggests that surveys in close races may be subtly biased against those years’ winners, not against a political party.

Ranganathan, Meghana, “Where Are the Real Errors in Political Polls?” Scientific American, Nov. 4, 2014, http://tinyurl.com/k83l2fh.
Some political polls may be inherently biased due to sampling that relies on factors of convenience, such as picking respondents who have phones or cars.

Silver, Nate, “Here’s Proof Some Pollsters Are Putting A Thumb On The Scale,” FiveThirtyEight.com, Nov. 14, 2014, http://tinyurl.com/mr9upsd.
A study shows polls tend to “herd” results, or align them closely with one another, which contributes to inaccurate election predictions.

Cellphones

Blumenthal, Mark, and Ariel Edwards-Levy, “Fewer Than Ever Americans Still Use Landline Phones,” The Huffington Post, July 8, 2014, http://tinyurl.com/luzttf8.
The rising number of wireless-only households poses problems for pollsters, who are restricted from making automated calls to cellphones under federal law.

Fung, Brian, “Pollsters used to worry that cellphone users would skew results. These days, not so much,” The Washington Post, Nov. 3, 2014, http://tinyurl.com/mq9otgo.
As decreasing landline use threatens to skew samples and data, pollsters are minimizing the costs of conducting cellphone surveys by eliminating calls to ineligible voters.

Thee-Brenan, Megan, “How the Rise of Cellphones Affects Surveys,” The New York Times, July 10, 2014, http://tinyurl.com/lo87rs3.
Because of declining landline use in American households, more polling companies are contacting respondents on their cellphones to maintain an accurate demographic sample.

News Outlets

Byers, Dylan, “Fox News accused of breaking exit poll rules,” Politico, Nov. 4, 2014, http://tinyurl.com/lxpknu3.
Fox News broke rules set by the National Election Pool, a consortium of media pollsters, by broadcasting exit poll data from the New Hampshire midterm election two hours before the polls officially closed.

Cohn, Nate, “Explaining Online Panels and the 2014 Midterms,” The New York Times, July 27, 2014, http://tinyurl.com/ldro42r.
The New York Times and CBS News partnered with YouGov, a global market research firm, to more accurately poll Americans before the 2014 midterm elections.

McCormick, John, “Polls Misdirected Midterm Narratives,” Bloomberg News, Nov. 6, 2014, http://tinyurl.com/os9pnyc.
As media outlets, universities, special interests and consulting groups increasingly conduct their own political polls, inaccurate outcomes and skewed data are becoming more prevalent.

Social Media

Moody, Chris, “How the GOP used Twitter to stretch election laws,” CNN, Nov. 17, 2014, http://tinyurl.com/lmazaub.
Republican groups used anonymous Twitter accounts to share internal polling data, a potential violation of federal campaign finance laws.

Nuzzo, Regina, “The Future of Election Forecasting,” Scientific American, Oct. 14, 2014, http://tinyurl.com/oeuk67h.
Polling companies may rely on user data from social media outlets such as Twitter and Flickr, as well as Microsoft’s Xbox gaming system, to survey millennials in future election cycles.

Oates, Mary, “How Google, Facebook and Tumblr are trying to increase voter turnout,” Tech Times, Nov. 4, 2014, http://tinyurl.com/q2der4x.
Social media platforms such as Google, Facebook and Tumblr are encouraging users to vote through search functions that locate nearby polling stations.

The Next Step: Additional Articles from Current Periodicals

CITING CQ RESEARCHER

Sample formats for citing these reports in a bibliography

include the ones listed below. Preferred styles and formats

vary, so please check with your instructor or professor.

MLA STYLE
Jost, Kenneth. “Remembering 9/11.” CQ Researcher 2 Sept. 2011: 701-732.

APA STYLE

Jost, K. (2011, September 2). Remembering 9/11. CQ Researcher, 9, 701-732.

CHICAGO STYLE

Jost, Kenneth. “Remembering 9/11.” CQ Researcher, September 2, 2011, 701-732.

ACCESS
CQ Researcher is available in print and online. For access, visit your library or www.cqresearcher.com.

STAY CURRENT
For notice of upcoming CQ Researcher reports or to learn more about CQ Researcher products, subscribe to the free email newsletters, CQ Researcher Alert! and CQ Researcher News: http://cqpress.com/newsletters.

PURCHASE
To purchase a CQ Researcher report in print or electronic format (PDF), visit www.cqpress.com or call 866-427-7737. Single reports start at $15. Bulk purchase discounts and electronic-rights licensing are also available.

SUBSCRIBE
Annual full-service CQ Researcher subscriptions, including 44 reports a year, monthly index updates, and a bound volume, start at $1,131. Add $25 for domestic postage.

CQ Researcher Online offers a backfile from 1991 and a number of tools to simplify research. For pricing information, call 800-818-7243 or 805-499-9774 or email [email protected].

Upcoming Reports

In-depth Reports on Issues in the News

Are you writing a paper?

Need backup for a debate?

Want to become an expert on an issue?

For 90 years, students have turned to CQ Researcher for in-depth reporting on issues in the news. Reports on a full range of political and social issues are now available. Following is a selection of recent reports:

Infectious Diseases, 2/13/15
Urban Gentrification, 2/20/15
Presidential Power, 2/27/15

Civil Liberties
Religion and Law, 11/14
Abortion Debates, 3/14
Voting Controversies, 2/14
Whistleblowers, 1/14
Religious Repression, 11/13

Crime/Law
Police Tactics, 12/14
Campus Sexual Assault, 10/14
Transnational Crime, 8/14
Racial Profiling, 11/13
Domestic Violence, 11/13

Education
Race and Education, 9/14
Paying College Athletes, 7/14
Dropout Rate, 6/14
School Discipline, 5/14

Environment/Society
Central American Gangs, 1/15
Robotic Warfare, 1/15
Global Population Growth, 1/15
Protecting the Oceans, 10/14
Housing the Homeless, 10/14
Future of Cars, 7/14

Health/Safety
Reforming Veterans’ Health Care, 11/14
E-Cigarettes, 9/14
Teen Suicide, 9/14
Understanding Autism, 7/14
Regulating Toxic Chemicals, 7/14
Treating Addiction, 5/14

Politics/Economy
European Unrest, 1/15
Nonprofit Groups and Partisan Politics, 11/14
Future of the GOP, 10/14
Assessing the Threat from al Qaeda, 6/14