Let’s look at the facts: An investigation of psychological biases
in policymakers’ interpretation of policy-relevant information
Julian Christensen
PhD Dissertation
Politica
© Forlaget Politica and the author 2018
ISBN: 978-87-7335-237-3
Cover: Svend Siune
Print: Fællestrykkeriet, Aarhus University
Layout: Annette Bruun Andersen
Submitted August 31, 2018
The public defense takes place November 23, 2018
Published November 2018
Forlaget Politica
c/o Department of Political Science
Aarhus BSS, Aarhus University
Bartholins Allé 7
DK-8000 Aarhus C
Denmark
Table of Contents
Acknowledgements
Chapter 1. Introduction
Chapter 2. Theoretical background
2.1. The ideal-typical process of evidence-based policymaking
2.2. The impact of prior attitudes and beliefs
2.3. Do contextual factors moderate the impact of political attitudes on policymakers’ interpretation of policy-relevant information?
2.3.1. Variations in information load
2.3.2. Justification requirements
2.4. The impact of how information is presented
2.4.1. Equivalence framing effects
2.4.2. Issue framing effects
2.4.3. Source cue effects
2.4.4. Order effects
Chapter 3. Empirical strategy
3.1. Research design
3.2. Empirical setting and data
3.2.1. Don’t policymakers behave like other human beings?
3.2.2. Policymakers are political experts
3.2.3. Policymakers are passionate about their attitudes
3.2.4. Judgmental mistakes matter in policymaking
3.2.5. Choice of empirical setting and data used in the dissertation
Chapter 4. Results
4.1. Are policymakers biased by attitudes and beliefs?
4.1.1. Goal reprioritization in light of political attitudes
4.1.2. Motivated numeracy in light of political attitudes
4.1.3. Choice-driven motivated reasoning
4.2. Do contextual factors moderate the impact of political attitudes on policymakers’ interpretation of policy-relevant information?
4.2.1. Variations in information load
4.2.2. Justification requirements
4.3. The impact of how information is presented
4.3.1. Equivalence framing effects
4.3.2. Issue framing effects
4.3.3. Source cue effects
4.3.4. Order effects
Chapter 5. Concluding discussion
5.1. Don’t policymakers behave like other human beings? Revisited
5.2. Methodological caveats
5.3. What’s next?
5.3.1. May policymakers be manipulated in their preference formation?
5.3.2. Does the policymaker role make it impossible to reduce motivated reasoning?
References
English summary
Dansk resumé
Acknowledgements
People say that it’s awkward when acknowledgments are too long and cheesy.
If this is true, a slippery bridge and a broken wrist may very well have saved
us all from a very awkward situation. Thus, I am forced to write the following
with my left hand and I have to keep it short.
There is, however, a long list of people without whom I could never have
made this dissertation. I have been lucky to have Søren Serritzlew and Lene
Aarøe as supervisors throughout my years as a PhD student. Søren and Lene,
I am thankful for all the hours we have spent together. The atmosphere has been friendly since day one, and not only have you helped me develop as a researcher with your incredibly valuable input during our many long discussions – you have also reminded me, without hesitation, to forget about work when other things were more important. I cannot express how thankful I am for that.
Being part of Aarhus University’s Department of Political Science has been
(and is still) a huge privilege. The department has offered a lot of resources
and opportunities to develop professionally as well as personally and I have
always been met with kindness and flexibility from the management whenever
it was needed. Furthermore, the department is home to some vibrant and inspiring research communities. I always look forward to the Friday meetings in
the research section for Public Administration where I am surrounded by the
most skilled and warm colleagues imaginable. I have also learned a lot from
participating in the department’s ongoing Political Behavior workshop and
from spontaneous discussions with colleagues during lunch and at the coffee
machine. As a PhD student, I have enjoyed being part of the department’s PhD
group. In particular, I would like to thank Kristina, Niels, Louise, Jakob, Thorbjørn, Amalie, Trine, Rasmus, Casper, Suthan, Jonas, Mathilde, Dorte, Stefan,
Rebecca, Mathias, Aske, Christian, Mette, Søren, Frederik, Sarah, Oluf, Vilde,
Morten, Sadi, Ulrich, Alexander, and Nicolas for a lot of good conversations
and experiences. It has been a pleasure to have such a great group of people
with whom to share the ups and downs of PhD life. Finally, I would like to
highlight some heroes without whom the department could never function. I
would like to thank Birgit, Ruth, Malene, Susanne, Ida, Helle, and Lene for
helping me with all sorts of administrative tasks. I would like to thank Annette
and her brilliant team of language editors for helping me improve my written
English (in fact, Annette is the reason this dissertation is in English and not in
Danglish!). Furthermore, I would like to thank the people who wake up every
morning, many hours before me, to keep the department nice and clean. We
do not always deserve you.
By now, it should be clear that people other than me have been vital for the making of this dissertation, and this will be even clearer when you look at the articles in the dissertation. I have been fortunate to work with some really bright co-authors, and I would like to thank Martin, Niels, Casper, Asbjørn, Søren, Jens, Oliver, and Don for the many hours of good collaboration. It has always been a pleasure to work with you, and together we have made things that I could never have done on my own.
There are plenty of others who should also be thanked for their contributions to my development as a researcher over the past three years. To mention a few, I would like to thank Steve Kelman for including me in the IPMJ PhD student advisory board, where I learned a lot about publishing in academia. I would also like to thank the people at the University of Minnesota, the University of Wisconsin-Madison, Rutgers University, the University of Exeter, the Behavioural Insights Team, Constructive Institute, and others who have provided
valuable feedback when I visited them to develop and present my project. I
would also like to thank the many great people that I have met at conferences
around the world.
I will finish these acknowledgments by thanking some of the people who
matter the most to me: Jonatan (Bro!), thank you for your bad (or, I mean,
good!) jokes, for making my life a bit (or, actually, a lot) more extravagant, and
for the many hours of talking, both when our lives have been full of joy and
when our mood could have been better. Niels, as friends, roomies, and close colleagues, we get to spend a lot of time together, and I am grateful for that as I enjoy every moment. I am, however, not that grateful for the broken wrist,
which was caused by the hobby you suggested ;-) Kristina, I was extremely
fortunate to share the fall of 2016 in Minnesota with you. It’s no secret that
those months were not easy for me, but because of you and Casper, the good
experiences dominate when I think back on the stay. I am happy for the many
good experiences we have had since then and the ones to look forward to in
the future.
I would also like to express my gratitude to my family in Solbjerg. You have
always supported me in pursuing my dreams and you are always there when I
need you. I cannot imagine a better family than you. Thank you also to my
family in Germany. I am grateful that we have established contact in recent
years, and I look forward to many good days together in the future. And to my in-laws: I am thankful for the open arms with which you have welcomed me. I always enjoy the time we spend together!
Finally, and most importantly, Jeanette, thank you for being exactly who you are. Until a few years ago, I thought that my biggest dream was to make a career in academia and to have an article accepted in APSR. Thank you for showing me how silly that was and for giving me new dreams that are much bigger and much more important. I am happy for all the adventures we have already had together and I cannot wait to see what our future will bring – having you in my life makes me the luckiest man on earth!
Chapter 1. Introduction
In the year 2000, the United Kingdom’s then Secretary of State for Education and Employment, David Blunkett, spoke to a group of leading academics at an Economic and Social Research Council seminar in London. Blunkett represented a cabinet that had placed the vision of a more modern government at
the top of its agenda (Cabinet Office 1999a), and central to this agenda was the
idea that “[g]ood quality policy making depends on high quality information”
(Cabinet Office, 1999b, paragraph 7.1). In his speech, Blunkett called for more
policy decisions to be based on evidence, including evidence from the social
sciences:
Social science should be at the heart of policy making. We need a revolution in
relations between government and the social research community – we need
social scientists to help to determine what works and why, and what types of
policy initiatives are likely to be most effective (Blunkett as cited by BBC, 2000).
David Blunkett’s call for a more prominent role of evidence is illustrative of a general surge in confidence in the idea that information is key to good policymaking. Evidence-based policymaking has become a buzzword in democratic systems all over the world (OECD 2017), reflecting a widespread belief that policymakers will make better decisions if provided with factual information in the form of, e.g., policy-relevant scientific evidence and systematic policy evaluations (Sanderson 2002; Maynard 2006; Heinrich 2007; Clarence 2002; Davies, Nutley, and Smith 2000).1 Policy-relevant evidence should allow policymakers to engage in strategic, outcome-focused policymaking where the attainment of political goals is maximized through factually informed decisions
(Nutley and Webb 2000, 20). As a result, governments have built infrastructures to ensure that policymakers have access to sound, policy-relevant information through channels such as performance measurement (Moynihan 2008; Van Dooren 2011) and scientific advice (Doubleday and Wilsdon 2012; Wilsdon, Allen, and Paulavets 2014), and policymakers have never had access to more information than they do today (Walgrave and Dejaeghere 2017).

1 Ironically, a year into my PhD, in the fall of 2016, Donald J. Trump was elected president of the USA, and since then we have had to get used to the idea of “alternative facts” (Bradner 2017) and that “truth isn’t truth” (Kenny 2018). In that sense, the role of factual information in politics has been openly and quite fundamentally challenged (Oxford Dictionaries (2016) even named “post-truth” word of the year 2016). It is too early to evaluate the long-run consequences of this experience, but so far the idea of evidence-based policymaking is still being promoted by scholars as well as practitioners (OECD 2017).
While a lot of effort has been devoted to ensuring that policymakers have access to policy-relevant information, much less effort has been devoted to understanding how policymakers interpret the information. This is unfortunate, as even the hardest facts have to be interpreted by human beings in order to inform decision-making (Moynihan 2008). Policymakers have to make sense of the information and translate it into decisions, and therefore we cannot understand the role of information in policymaking without understanding how policymakers interpret the information.2
The purpose of this dissertation is to improve our understanding of how policymakers interpret information. In order to do so, I draw on important insights from psychologically informed literature about people’s interpretation of information in general (Lupia, McCubbins, and Popkin 2000; Huddy, Sears, and Levy 2013). Specifically, I draw on literature about voters’ interpretation of political information, which has shown that psychological biases often distort how we process and learn from information. Some studies have shown tendencies to reject information that does not support desired conclusions about the world (Kunda 1990; Goren, Federico, and Kittilson 2009; Taber and Lodge 2006; Taber, Cann, and Kucsova 2009; Groenendyk 2013; Kahan 2016a; Cohen 2003; Bisgaard 2015; Baekgaard and Serritzlew 2016; James and Van Ryzin 2017). Other studies have shown that people’s interpretation of information is often biased by how the information is presented to them (Tversky and Kahneman 1981; Druckman 2001; Olsen 2015; Druckman 2004; Nelson, Clawson, and Oxley 1997).
While some literature has pointed to the relevance of psychological biases in relation to policymaking (see e.g. Bartels and Bonneau 2014; Jervis 1976; Levy 2013; Hallsworth et al. 2018), almost no direct investigations have been made of psychological processes among actual policymakers (for notable exceptions, see Linde and Vis (2017) and Sheffer et al. (2017)). For reasons to be discussed in Chapter 3, we cannot take for granted that findings of psychological biases among ordinary citizens are generalizable to real policymakers.
2 Performance management literature on decision-makers’ interpretation of decision-relevant information mainly focuses on managers’ interpretation of performance information. For instance, some literature has shown how managers can use comparisons with different kinds of aspiration levels, be they social, historical, or political, to evaluate organizational performance (Olsen 2017; Nielsen 2014; Holm 2017; Salge 2011; Meier, Favero, and Zhu 2015). Furthermore, Moynihan (2008) has shown how actors can make strategic interpretations of information in order to defend their (often institutionally defined) interests.
Therefore, one of the main purposes of the dissertation is to investigate psychological biases in real policymakers’ interpretations of policy-relevant information.
If policymakers are indeed biased in their interpretation of information, it will call into question core assumptions behind the idea that policy improvements will follow when policymakers get access to factual information. If policymakers’ interpretation of information is biased, it is likely that their use of the information will be biased as well, and an important question becomes under what conditions the biases are most and least influential. Thus, the dissertation asks two overall research questions: Do psychological biases affect policymakers’ interpretation of policy-relevant information? And do contextual factors moderate the impact of psychological biases on policymakers’ interpretation?
The rest of this summary report is structured as follows: In Chapter 2, I present the theoretical background of my investigation of the dissertation’s research question. I present literature about motivated reasoning and about the influence of how information is presented, and I use this literature to formulate expectations to be tested in the dissertation. In Chapter 3, I present the overall empirical strategy for my investigation. I argue that survey experiments are useful for answering the dissertation’s research question. Furthermore, I argue that there is a need to conduct experiments on samples of actual policymakers, and I discuss how studies of local policymakers make large-n investigations possible, even with the relatively low response rates that must be expected when surveying political elites. I end the chapter with an overview of the data that have been collected for the purpose of my investigation. In Chapter 4, I present experimental tests and results from the dissertation’s six articles (listed in Table 1), all of which contain evidence relevant to the dissertation’s overall research question. Articles A, B, C, and D ask what happens when policymakers hope to reach certain conclusions based on a given piece of information, and articles E and F focus on the effects of how information is presented. The articles show that policymakers are indeed biased in their interpretation of policy-relevant information and suggest that contextual factors do moderate the policymakers’ tendency to engage in biased reasoning (in ways, though, that were not expected). In Chapter 5, I conclude with a discussion of the dissertation’s contributions and limitations, and I set out an agenda for future research. Needless to say, the chapters draw heavily on content that is also present in the dissertation’s articles.
Table 1: Overview of articles in the dissertation

Article A (short title: Goal Reprioritization)
“How do Elected Officials Evaluate Performance? Goal Preferences, Governance Preferences and the Process of Goal Reprioritization”. Co-authored with Casper Dahlmann, Asbjørn Mathiasen, Donald P. Moynihan, and Niels Bjørn Petersen. Published in Journal of Public Administration Research and Theory 28(2), pp. 197-211 (doi.org/10.1093/jopart/muy001).

Article B (short title: Role of Evidence)
“The Role of Evidence in Politics: Motivated Reasoning and Persuasion among Politicians”. Co-authored with Martin Baekgaard, Casper Dahlmann, Asbjørn Mathiasen, and Niels Bjørn Petersen. Forthcoming in British Journal of Political Science (doi.org/10.1017/S0007123417000084).

Article C (short title: Justification Requirements)
“‘The numbers say so…’: Do justification requirements reduce motivated reasoning in politicians’ evaluation of factual information?” Working paper.

Article D (short title: Biased, Not Blind)
“Biased, not blind: An experimental test of self-serving biases in service users’ evaluations of performance information”. Forthcoming in Public Administration (doi.org/10.1111/padm.12520).

Article E (short title: Politicians & Bureaucrats)
“Politicians and Bureaucrats: Reassessing the Power Potential of the Bureaucracy”. Co-authored with Jens Blom-Hansen, Martin Baekgaard, and Søren Serritzlew. Working paper.

Article F (short title: Order Effects)
“Public Reporting of Multidimensional Performance: Order Effects on Citizens’ Perceptions and Judgment”. Co-authored with Oliver James. Working paper.

Note: Short titles are used throughout the rest of the summary report.
Chapter 2. Theoretical background
In this chapter, I present the theoretical background of the dissertation and the theoretical expectations to be investigated in it. As stated in the introduction, the dissertation’s focus on policymakers’ interpretation of information is motivated by a growing confidence in the idea that information is key to good policymaking. Before I present literature on and expectations about psychological biases, I find it appropriate to give an introduction to the ideal-typical process of evidence-based policymaking (section 2.1). This is followed by an overview of the theoretical background of the dissertation’s investigation. I present literature on (and expectations about) the impact of attitudes and beliefs (sections 2.2 and 2.3) and on biases resulting from how information is presented (section 2.4). As noted in the introduction, the chapter’s theoretical expectations are tested in the dissertation’s six articles, which means that most of the theoretical arguments are also present in the articles. Specifically, sections 2.2 and 2.3 draw on articles A-D (cf. Table 1 in the introduction), and section 2.4 draws on articles E-F.
2.1. The ideal-typical process of evidence-based policymaking

In the introduction, I noted that the idea of evidence-based policymaking has
gained prominence in recent decades. According to this idea, in ideal-typical
terms, it makes sense to view policymaking as a cyclical process as illustrated
in Figure 1 below. The cycle starts with a political goal-setting phase where the
policymakers’ job is to set a direction for society and prioritize what problems
to (attempt to) solve and what goals to pursue (Davies, Nutley, and Smith
2000, 3). Factual information can play a role in this phase, for example by
pointing to societal problems that policymakers find it important to address,
but political goal-setting is ultimately an ideological rather than a technical
exercise.
Figure 1: The ideal-typical process of evidence-based policymaking

[Figure: a cycle of four stages: Political goal-setting → Analysis: What works? → Informed policy-decision → Evaluation: Did it work? → back to Political goal-setting]

Note: Figure adapted from Nutley and Webb (2000, 26) and Moynihan (2008, 6).
When policymakers have articulated their goals, they should, to use David Blunkett’s words (cf. quote in the introduction), use policy-relevant evidence to analyze “what works and why, and what types of policy initiatives are likely to be most effective” in terms of attaining the goals that have been set (Blunkett as cited by BBC, 2000). Policymakers may draw on different kinds of evidence, e.g., scientific evidence, policy evaluations, government reports, reports from think tanks, and benchmark data (Nutley and Webb 2000, 23). In practice, bureaucrats will often play a crucial role in deciding what evidence is relevant and in communicating the evidence to the policymakers (cf. article E, “Politicians & Bureaucrats”).
The rationale behind this use of information is that by making informed policy decisions based on available evidence, policymakers will be able to pursue their political goals more systematically. In that sense, they will be better off in terms of moving society in what they find to be a desired direction than they would be by relying on their potentially faulty intuitions, but of course, this is no guarantee of goal attainment. Sometimes, there is not enough (valid) evidence in relation to a given policy to make an evidence-based decision, and even if there is ample valid evidence, the social world can be less predictable than policymakers might hope (Tetlock 2017). Therefore, it is crucial to evaluate policies continuously. If policies do not work, they may need to be adjusted, and if they work very well, there may be room for prioritizing new goals, which would start a new cycle of policymaking.
I call this process of evidence-based policymaking ‘ideal-typical’ to acknowledge that it is not an uncontroversial ideal. It is a rationalist model of policymaking, and some may argue that it ignores, or even hides, that politics
is inherently conflictual.3 At the same time, however, it should be noted that while making policies based on “what works” may sound like a rather technocratic exercise, there is still room for ideological fights and disagreements. I noted above that political goal-setting is an ideological, rather than a technical, exercise, as factual information cannot decide what goals should be prioritized in any given situation. To give an example, which I will get back to later on in the dissertation, evidence may show that private schools are better than public schools at ensuring high academic performance among their students, but that public schools have fewer problems with student wellbeing. Should policymakers privatize more schools based on this evidence? Or should they maintain public schools in order to safeguard student wellbeing? The answer depends on the policymakers’ prioritization between academic performance and student wellbeing. Some policymakers may have as their top priority to ensure high academic performance, even if it means that they have to accept more problems with student wellbeing. Others may be more concerned about student wellbeing and oppose policies that harm this goal, even if this means that they will have to accept poorer academic achievements. Both views are legitimate, and therefore evidence cannot (and should not) replace attitudes in policymaking. What evidence can (and should) do, according to the idea of evidence-based policymaking, is to improve the policymakers’ ability to anticipate the consequences of their decisions and thereby help them decide what policies to fight for in order to pursue the political goals they find important.
The increasing prominence of evidence-based policymaking can be seen as a policy parallel to the performance management movement in public administration (Van Dooren, Bouckaert, and Halligan 2010; Moynihan 2008; Van de Walle and Van Dooren 2011; Gerrish 2016; Holm 2018; Hvidman and Andersen 2014; Hatry 2006). Performance management can be used for many purposes (Behn 2003), but a central idea is that decision-makers, including elected officials (Askim 2011; Nielsen and Moynihan 2017; Van Dooren, Bouckaert, and Halligan 2010), should use performance information to engage in improved, more strategic and outcome-focused decision-making.
3 Already in the 1960s, Wildavsky (1966) criticized how “evangelical economizers” had, by “spreading the gospel of efficiency” (Ibid., 308), succeeded in framing economic analytical approaches to policymaking as objective and rational. He worried that, by framing the focus on efficiency as rational, reformers did not acknowledge broader political consequences of their policies (such as consequences for the distribution of the society’s resources and power structures in the political system). Therefore, instead of imagining that policymaking can become fully rational, it may be better to acknowledge that politics is, and will always be, a messy process, suffused with fights over power, protection of special interests, and ideological disagreements.
Thus, the cyclical, evidence-based policymaking process in Figure 1 is very similar to how decisions should be made according to the performance management doctrine as presented by Moynihan (2008).
The idea of evidence-based policymaking is intuitively appealing as it promises a systematic way towards the attainment of political goals, but as I argued in the introduction, we cannot fully understand the role of evidence in policymaking without understanding how policymakers interpret information. In order to cast light on this, I draw on psychologically informed literature on people’s interpretation of information in general, which has shown that people’s interpretation is often distorted by psychological biases. Below, I briefly introduce the theoretical background of my psychologically informed investigation and present the expectations that are tested in the dissertation’s articles.
2.2. The impact of prior attitudes and beliefs

Four of the dissertation’s articles (articles A-D, cf. Table 1 in Chapter 1) investigate how information is interpreted when policymakers know in advance
what they hope to conclude based on the information due to prior attitudes
and beliefs. These articles draw on the theory of motivated reasoning, which
has its roots in social psychological literature and is now among the most
prominent theories in political psychology.
The theory of motivated reasoning is based on the premise that when human beings reason about information, their reasoning will always be motivated by goals, defined as any “wish, desire, or preference that concerns the outcome of a given reasoning task” (Kunda 1990, 480). Reasoning goals vary from individual to individual and, for any given individual, from situation to situation, and the goals affect how people approach and interpret information. Sometimes, people are motivated to invest cognitive effort in careful processing of a great deal of information in order to make correct judgments about issues at hand (whatever these correct judgments may be). When this is the case, people tend to be open-minded and nuanced in their interpretation of the information, and they tend to be reluctant to commit to premature and definite conclusions (Kruglanski and Webster 1996, 264). Motivated reasoning scholars use the term “accuracy goals” to describe the reasoning goals behind such open-minded reasoning (Kunda 1990, 480). In other situations, people are motivated to preserve, or “freeze” on, predefined judgments, meaning that they know in advance what conclusions they want to reach. When this is the case, people tend to avoid new information that might challenge their desired conclusions, and if they are exposed to new information, they tend to approach this information with a closed-minded, biased defense of their desired conclusions (Kruglanski and Webster 1996, 264). Motivated reasoning scholars use the term “directional goals” to describe the reasoning goals behind such biased, closed-minded reasoning (Lodge and Taber 2000, 186).
Political science literature on motivated reasoning has mainly focused on politically motivated reasoning. In this literature, the most prominent sources of directional goals are political identities and attitudes that people are often motivated to defend when dealing with politically relevant facts and information. For instance, voters are often motivated to evaluate information in ways that are politically convenient in light of their partisan identity (Campbell et al. 1960). Studies have shown that voters often auto-agree with policymakers from their own political party and auto-disagree with policymakers from competing parties (Cohen 2003; Bolsen, Druckman, and Cook 2014; Goren, Federico, and Kittilson 2009; Slothuus and de Vreese 2010; Leeper and Slothuus 2014; Arceneaux and Vander Wielen 2017). Similarly, compared to supporters of opposition parties, voters whose party is in charge of government tend to be less likely to know about government failures and to be more forgiving of such failures if they do know about them (Tilley and Hobolt 2011; Bisgaard 2015; Jerit and Barabas 2012; Groenendyk 2013). Furthermore, when voters are exposed to information about issues that they have political attitudes about, they will often attempt to evaluate the information in ways that support these attitudes. Thus, people will often accept attitude-congruent information uncritically and emphasize its importance, while attempting to counter-argue and discount information that challenges their political attitudes (Lord, Ross, and Lepper 1979; Taber and Lodge 2006; Taber, Cann, and Kucsova 2009; Druckman 2012; James and Van Ryzin 2017). Some studies have even found a tendency among voters to misinterpret attitude-incongruent factual information (Kahan et al. 2017; Baekgaard and Serritzlew 2016).
Based on the existing literature, I expect that elected policymakers will
tend to engage in politically motivated reasoning when evaluating policy-rel-
evant information. Policymakers will often have strong political attitudes re-
garding the issues about which they make policies (in fact, they have been
elected based on these attitudes; see also section 3.2.3), and I expect them to
be motivated to defend these attitudes.
The dissertation contains tests for two forms of politically motivated rea-
soning. First, as we argue in article A (“Goal Reprioritization”), there is reason
to expect that policymakers will tend to reweight the perceived importance of
individual pieces of information in light of the (in)congruence between the in-
formation and conclusions they are motivated to defend. In article A, we de-
20
velop the term “goal reprioritization” to describe this behavior. Goal repriori-
tization implies that, instead of evaluating information based on their political
goals, policymakers will often alter their goals in response to the information.
In other words, their ability to use evidence to pursue political goals consist-
ently will be hampered. To continue the school example from section 2.1, im-
agine a policymaker who thinks that schools should have high academic per-
formance as their top priority and is less concerned about student wellbeing.
Imagine also that, for ideological reasons, this policymaker is critical towards
contracting out the delivery of public services as she thinks that businesses
should not be able to profit from such services. If this policymaker learns that
private schools are better at ensuring high academic performance than public
schools are, but that public schools have higher student wellbeing, I expect
that, instead of accepting that private schools are better at what she finds im-
portant, the policymaker will reweight the relative importance of academic
achievements and student wellbeing. By increasing the perceived importance
of student wellbeing while lowering the perceived importance of academic
achievements, the policymaker will be able to use the evidence at hand to de-
fend her attitudes towards the role of the public and private sector in deliver-
ing public services. As I return to in Chapter 4, we use the dilemma between
academic performance and student wellbeing to test for goal reprioritization
in article A (see section 4.1.1).
In order for goal reprioritization to make sense, there needs to be access
to information regarding multiple political goals and there needs to be some
ambiguity in the information at hand. For instance, in the school example
above, there was information about academic achievements and information
about student wellbeing, and the two pieces of information had competing im-
plications. As a reasoning strategy, goal reprioritization is psychologically ap-
pealing as it allows policymakers to defend their attitudes while maintaining
an “appearance of objectivity” (Groenendyk 2013, 50). Because of the ambi-
guity in the evidence, goal reprioritization allows policymakers to make rea-
sonable (and apparently evidence-based) arguments in support of their polit-
ical attitudes. However, I do not expect that access to ambiguous information
will be necessary in order for policymakers to engage in motivated reasoning.
As mentioned, studies have shown that voters tend to misinterpret attitude-
incongruent factual information (Kahan et al. 2017; Baekgaard and Serritzlew
2016). There is therefore reason to expect that in situations where policymak-
ers face evidence that unambiguously supports one conclusion, they will tend
to misinterpret the evidence if it does not support their desired conclusion.
This expectation is tested in articles B and C (“Role of Evidence” and “Justifi-
cation Requirements”) (see Chapter 4).
21
Motivated reasoning challenges the idea that policy improvements will fol-
low when policymakers get access to policy-relevant information. If policy-
makers are biased to systematically evaluate information in ways that support
their existing political attitudes, it will hamper their ability to learn from the
information and change direction if needed. In other words, insights about
motivated reasoning question policymakers’ ability to use evidence to work
systematically towards attainment of the goals they find important.
As I noted above, political science literature on motivated reasoning has
mainly focused on politically motivated reasoning resulting from people’s de-
sire to defend their political identities and attitudes. However, information
does not have to be ideologically laden to trigger directional goals. In fact,
some of the first studies of motivated reasoning examined sports fans and
their tendency to make biased judgments in defense of their favorite teams
(Hastorf and Cantril 1954), and public administration studies have shown
tendencies among managers and employees to make self-serving assessments
of organizational performance (Meier and O’Toole 2013; Petersen, Laumann,
and Jakobsen 2018). So even if policymakers do sometimes make policies
about issues they do not have strong political attitudes about, they may still be
driven by directional goals in their interpretation of policy-relevant infor-
mation. In article D (“Biased, Not Blind”), I discuss how the simple act of mak-
ing a choice can make people engage in post-decisional directionally moti-
vated reasoning about choice-related information. Questioning the desirabil-
ity of one’s choices (compared to alternative choices) is unpleasant, and the
literature has shown that when people have made choices, they often seek to
defend them (Brehm 1956; Festinger 1957; Gollwitzer 1990; Shultz, Léveillé,
and Lepper 1999; Cooper 2007). The literature on choice-driven motivated
reasoning has primarily focused on consumer choices, but I expect the insights
to apply to policymaking as well. Policymakers make many important choices
through their job (they allocate scarce resources, they reform policies etc.),
and based on the literature, there is reason to expect that policymakers will be
motivated to defend these choices when subsequently interpreting choice-re-
lated information.
2.3. Do contextual factors moderate the impact of political attitudes on policymakers’ interpretation of policy-relevant information? In the introduction, I formulated an overall research question for the disser-
tation, which consisted of two parts. The first part asked if psychological biases
affect policymakers’ interpretation of policy-relevant factual information, and
22
the expectations above concern this part of the research question. The second
part asked if contextual factors moderate the impact of psychological biases
on policymakers’ interpretation. If policymakers are biased in their interpre-
tation of information, their use of the information can also be expected to be
biased, and an important question is under what conditions the biases are
most and least influential. Articles B and C (“Role of Evidence” and “Justifica-
tion Requirements”) contribute to answering this second part of the research
question. “Contextual factors” is a very broad concept,4 but articles B and C
focus on two important context factors that, according to the literature, may
moderate the impact of attitudes on policymakers’ interpretation. In article C
(“Role of Evidence”), we investigate the effect of variations in the amount of
policy-relevant information on policymakers’ interpretation of the infor-
mation, and in article D (“Justification Requirements”), I investigate whether
policymakers alter their interpretation of information when they know that
they have to be able to justify it. Below, in sections 2.3.1 and 2.3.2, I present
the theoretical background of these two investigations in turn.
2.3.1. Variations in information load
First, while the human tendency to engage in biased reasoning in defense of
political attitudes is well established in the literature, it has also been noted
that “[p]eople do not seem to be at liberty to conclude whatever they want to
conclude merely because they want to” (Kunda 1990, 482). For instance, Leon
Festinger argued that people are generally motivated to defend their attitudes
and beliefs, but that it is difficult for people to maintain beliefs if they are
“clearly invalid” (Festinger 1957, 243). Reality constraints can force people to
revise beliefs that are “directly and unequivocally disconfirmed by good evi-
dence” (ibid.) even though this is psychologically uncomfortable. As Ziva
Kunda formulates it, people “draw the desired conclusion only if they can
muster up the evidence necessary to support it” (Kunda 1990, 483).
Redlawsk, Civettini, and Emmerson draw on these insights and find that
when people are presented with information that is slightly at odds with their
political attitudes and beliefs, the information tends to have attitude-strength-
ening effects due to directionally motivated reasoning about the information
(Redlawsk et al. 2010). However, when people are presented with information
that is highly incongruent with their attitudes and beliefs, this can force them
4 Examples of contextual factors that are not studied in this dissertation, but are cer-
tainly worthy of investigation, are (monetary) incentives to make accurate evalua-
tions of information (Bullock et al. 2015; Prior, Sood, and Khanna 2015) and varia-
tions in the degree of politicization in the information environment (Slothuus and de
Vreese 2010; James and Van Ryzin 2017).
23
to give in to the evidence. Redlawsk and his colleagues identify an affective
tipping point at which people can no longer ignore the incongruence between
their attitudes and factual information, meaning that they have to “take ‘real-
ity’ into account” (ibid., 583) and revise their attitudes instead of defending
them. They define the affective tipping point in relative terms, but in more
general terms the substantial conclusion is that people can be forced to make
changes in their attitudes and beliefs if they are confronted with overwhelming
evidence suggesting that such changes are needed. Therefore, as we argue in
article B, there is reason to expect that political attitudes will matter less to
policymakers’ interpretation of policy-relevant information when they are
presented with large amounts of information than when they are presented
with smaller amounts of information. This expectation presupposes, of
course, that the available information unambiguously supports one conclu-
sion. Otherwise, the policymakers could engage in goal reprioritization in de-
fense of their desired conclusion, cf. section 2.2.
2.3.2. Justification requirements
As mentioned, I also ask whether policymakers’ tendency to engage in direc-
tionally motivated reasoning about policy-relevant information is affected
when they know that they must be able to justify their interpretation of the
information. In democratic systems, elected policymakers are required to jus-
tify their claims regularly through a variety of institutions, and some literature
suggests that this may reduce biases in the policymakers’ reasoning.
Scholars have argued that justification requirements5 work as a “signal to
(…) take the role of the other toward their own mental processes and to give
serious weight to the possibility that their preferred answers are wrong”
(Tetlock and Kim 1987, 707). Because people are motivated to appear rational
and competent (Geen 1991; Kunda 1990; Groenendyk 2013; Klar and
Krupnikov 2016), they are expected to react to justification requirements with
“impression construction” (Leary and Kowalski 1990) by increasing the cog-
nitive effort they invest in evaluating information and by using this effort to
make less biased interpretations of the information (Chaiken 1980; Tetlock
1985; Tetlock and Kim 1987; Lerner, Goldberg, and Tetlock 1998). In other
5 Researchers in Psychology use the term “accountability” to describe the expectation
to be called on to justify one’s interpretation of information (Lerner and Tetlock
1999, 255). I refrain from using this term in order to avoid confusion with other kinds
of accountability in political science and public administration literature, such as
electoral accountability (Canes-Wrone, Brady, and Cogan 2002; Rogers 2017;
Lowry, Alt, and Ferree 1998) and performance management as a tool for accounta-
bility (Moynihan 2008).
24
words, justification requirements are expected to create a pressure for accu-
racy goals in people’s interpretation of information.
Justification requirements have proven effective in reducing a wide range
of cognitive distortions (Bodenhausen, Kramer, and Süsser 1994; Lerner,
Goldberg, and Tetlock 1998; Tetlock 1983; Webster, Richter, and Kruglanski
1996; Miller and Fagley 1991) and have been highlighted as a “simple, but sur-
prisingly effective, social check on many judgmental shortcomings” (Tetlock
1983, 291). However, effects of justification requirements have not been sys-
tematically investigated in relation to motivated reasoning. Some researchers
have speculated that policymakers may engage in less politically motivated
reasoning about policy-relevant information when they expect that they will
later be asked to justify their interpretation of the information (Bartels and
Bonneau 2014, 226). Bolsen and colleagues (2014) found debiasing effects of
a justification-inspired intervention in a motivated reasoning experiment, but
in addition to a justification requirement, their intervention consisted of an
appeal to be open-minded. We therefore do not know whether it was the jus-
tification requirement, the appeal, or the combination of the two that led to
less biased reasoning in their experiment. Thus, my investigation in article C
(see Chapter 4) is the first to systematically test for effects of justification re-
quirements on the tendency to engage in politically motivated reasoning. I ex-
pect that policymakers will engage in a more effortful search for and pro-
cessing of policy-relevant information when they are asked to justify their in-
terpretation of the information than when they are not asked to justify their
interpretation. Furthermore, I expect political attitudes to matter less to poli-
cymakers’ interpretation of policy-relevant information when they are asked
to justify their interpretation of the information.
2.4. The impact of how information is presented So far, I have presented the theoretical background of articles A-D, which all
concern the distortive impact of prior attitudes and beliefs on policymakers’
interpretation of policy-relevant information. Articles E and F (“Politicians &
Bureaucrats” and “Order Effects”) concern psychological biases resulting from
how information is presented. The literature has shown that even small, seem-
ingly arbitrary changes in the presentation of information can cause system-
atic and quite substantial changes in people’s interpretation of – and decision-
making based on – the information. As I also discuss in Chapter 5, information
providers may be in a powerful position to influence policymakers’ interpre-
tation of information through choices regarding the presentation of the infor-
mation. The dissertation contains tests of four prominent ways in which poli-
25
cymakers’ interpretation of policy-relevant information may be biased by fac-
tors related to the presentation of the information. Thus, in sections 2.4.1-
2.4.4, I present theoretical expectations about equivalence framing effects, is-
sue framing effects, source cue effects and order effects on policymakers’ in-
terpretation of information.
2.4.1. Equivalence framing effects
Equivalence framing effects are effects of “different, but logically equivalent,
words or phrases [that] (…) cause individuals to alter their preferences”
(Druckman 2001, 228). Equivalence framing effects were first discovered by
Tversky and Kahneman (1981), who developed prospect theory in the study of
citizens’ risk preferences. In their Asian Disease problem, Tversky and Kahne-
man asked citizens to choose between two programs to counteract a fictitious
exotic disease. They told respondents that 600 lives would be lost if nothing
was done to counteract the disease and both programs had an identical ex-
pected value of 200 saved lives. One program was risk-seeking, the other pro-
gram was risk-averse, and Tversky and Kahneman discovered that if the two
programs were presented in a domain of gains, focusing on the possible num-
bers of saved lives, a vast majority of the respondents preferred the risk-averse
program (ibid., 453). If the programs were presented in a domain of losses,
focusing on the possible numbers of deaths, a vast majority preferred the risk-
seeking program (ibid.).
Since Tversky and Kahneman’s work, equivalence framing effects have
been studied in other settings and in relation to other types of preferences be-
sides risk preferences (Levin, Schneider, and Gaeth 1998; Druckman and
Nelson 2003; Druckman 2001; Druckman 2004; Valentino and Nardis 2013).
Today, equivalence framing effects are recognized as “one of the most stun-
ning and influential demonstrations of irrationality” (Druckman 2004, 671).
In public administration literature, equivalence framing has gained promi-
nence since Olsen (2015) found that citizens made more positive evaluations
of a hospital if they were told that it had a satisfaction rate of 90 percent com-
pared to an equivalent scenario where the hospital was presented with a dis-
satisfaction rate of 10 percent (ibid.). Similar effects have been documented
among public managers and employees (Belardinelli et al. 2018; Bellé, Canta-
relli, and Belardinelli 2018).
Equivalence framing effects are thought to emerge because of differences
in the associations that are unconsciously activated in people’s working
memory in reaction to information, depending on how the information is
worded (Druckman 2004). For instance, when people evaluate a hospital
26
based on the number of satisfied users, the positive labelling of the infor-
mation (the word “satisfied”) tends to activate positive considerations outside
of people’s awareness, which in turn affect their mindset in a positive direc-
tion. In contrast, when people evaluate the hospital based on the number of
dissatisfied users, the negative labelling of the information (the word “dissat-
isfied”) tends to activate negative thoughts, thereby affecting people’s mindset
in a negative direction.
Recently, equivalence framing effects have been documented on elected
policymakers’ risk preferences using replications of Tversky and Kahneman’s
Asian Disease problem (Sheffer et al. 2017; Linde and Vis 2017). Furthermore,
results have suggested that policymakers’ risk preferences become less vulner-
able to the framing of outcomes if the policymakers are familiar with the type
of decision at hand (Linde and Vis 2017). However, no one has studied equiv-
alence framing effects on policymakers’ support for actual policies for which
the policymakers are responsible in their own jurisdictions and may have at-
titudes about.6 Based on the literature, there is reason to expect that policy-
makers will tend to make more favorable evaluations of positively framed pol-
icy-relevant information than of logically equivalent but negatively framed in-
formation, although this expectation is weakened by the results of Linde and
Vis (ibid.).
2.4.2. Issue framing effects
While equivalence framing effects occur because of different but logically
equivalent wordings of information, issue framing effects occur when “by em-
phasizing a subset of potentially relevant considerations, a speaker leads indi-
viduals to focus on these considerations when constructing their opinions,”
which in turn affects people’s preference formation based on the information
(Druckman 2004, 672). Thus, issue framing effects occur because emphasis
has been placed on qualitatively different aspects of an issue.
Issue framing effects were e.g. demonstrated in the Ku Klux Klan study by
Nelson, Clawson, and Oxley (1997) where university students were more will-
ing to let hate groups like Ku Klux Klan rally at their university if the question
was framed as a matter of free speech rather than of risk to public order. In
6 In their experiments, Linde and Vis (2017) actively attempted to prevent ideology
from interfering with their results (ibid., 111) and therefore refrained from describing
actual policies for which their respondents could have preferences. However, I think
that because of the central role of ideology and preferences in policymaking, we have
to allow for the interference of these factors in our results in order to investigate psy-
chological biases in policymaking.
27
relation to government spending, it has been demonstrated that “when gov-
ernment spending for the poor is framed as enhancing the chance that poor
people can get ahead, individuals tend to support increased spending. On the
other hand, when it is framed as resulting in higher taxes, individuals tend to
oppose increased spending” (Druckman 2001, 1043).
In contrast to equivalence framing effects that are thought to work through
processes outside of people’s awareness, cf. the section above, issue framing
effects are thought to occur because of more deliberate processes (Druckman
2001; 2004). When issue frames are used in the presentation of information,
the frames tend to make people reprioritize the relative importance of com-
peting considerations with relevance for the information at hand (ibid.). In the
Ku Klux Klan example, most people tend to support freedom of speech but
they also tend to find public order important. By highlighting one of these con-
siderations in relation to the choice to let (or not let) Ku Klux Klan rally at their
university, more people tend to make their decision based on that considera-
tion.
In the existing literature, issue framing effects are well established among
citizens, and scholars have worried that political elites “face few constraints to
using frames to influence and manipulate citizens' opinions” (Druckman
2001, 1041). As we argue in article E (“Politicians and Bureaucrats”), there is
reason to expect that policymakers will be susceptible to issue framing as well
and that they will therefore tend to make different evaluations of information
depending on the information-related considerations that are emphasized in
the presentation of the information.
2.4.3. Source cue effects
A third way in which information providers may be able to affect policymak-
ers’ interpretation of policy-relevant information is by highlighting policy ad-
vocates that are either ideologically aligned or unaligned with the policymak-
ers when presenting the information to them. The literature on source cue ef-
fects has shown that when people make judgments based on information, their
judgments are often affected by signals from the social environment (Cohen
2003). If people are told how others interpret some information, this
knowledge tends to affect their own interpretation of the information.
Source cue effects may occur because of directionally motivated reasoning
resulting from people’s motivation to agree with certain groups of individuals
and disagree with other groups. This explanation is widespread in political
science literature on voters’ reactions to party cues. As mentioned in section
2.2, studies have found that voters are often motivated to make judgments
28
that are consistent with the judgments of policymakers from their own politi-
cal party (Campbell et al. 1960; Bolsen, Druckman, and Cook 2014; Leeper
and Slothuus 2014; Arceneaux and Vander Wielen 2017). However, an alter-
native explanation of source cue effects is that people use source cues as heu-
ristics that allow them to make sense of information without analyzing the in-
formation thoroughly. According to this idea, if people learn about the judg-
ments of other groups of well-informed individuals who have the same politi-
cal goals and values as themselves (Gilens and Murakawa 2002, 31), mimick-
ing these judgments will often lead to “approximately rational” behavior, even
if people are not well informed themselves (Sniderman, Brody, and Tetlock
1991, 165).
Existing political science literature on source cues has primarily focused
on parties (and thus, policymakers) as cue senders. However, as we argue in
article E, policymakers may also be susceptible to source cue effects, and there
is therefore reason to expect that they will tend more often to evaluate policy-
relevant information in support of policies if they are told that the policies are
advocated for by groups with whom they are ideologically aligned.
2.4.4. Order effects
In addition to the tests of equivalence framing, issue framing, and source cue
effects, the dissertation’s investigation of biases resulting from factors related
to the presentation of information includes a test of order effects on judgments
based on sequences of information. Order effects are evident when people
evaluate sequences of information in systematically different ways, depending
on the order in which the information is being presented (Hogarth and
Einhorn 1992). Order effects have been investigated in a variety of domains,
such as consumer choices (Bond et al. 2007; Carlson, Meloy, and Russo 2006),
jury verdicts (Tetlock 1983; Davis 1984), and political campaigning (Chong
and Druckman 2010). The literature identifies two prominent kinds of order
effects.
Some studies have suggested that when people make judgments based on
sequences of information, first impressions based on early pieces of infor-
mation tend to have larger effects on judgments than information encoun-
tered later in the sequence (Bond et al. 2007; Carlson, Meloy, and Russo 2006;
Nickerson 1998; Tetlock 1983). These studies suggest that people form initial
dispositions based on early pieces of information, and that these dispositions
bias evaluations of subsequent information. If people form a favorable first
impression of some object of evaluation, they will tend to make more favorable
evaluations of subsequent pieces of information about that object of evalua-
29
tion, which, in the end, will lead them to make more favorable overall evalua-
tions (Bond et al. 2007). Scholars use the term “primacy effects” to describe
the disproportional impact of first impressions based on early pieces of infor-
mation.
Other studies have found exact opposite effects where last impressions
based on later pieces of information are disproportionately influential on
judgments (Davis 1984; Hogarth and Einhorn 1992). This can be explained
with limitations in the human working memory, meaning that people have to
base their judgments on whatever considerations are accessible in their mind
at any given point in time. Zaller notes that “the more recently a consideration
has been called to mind or thought about, the less time it takes to retrieve that
consideration or related considerations from memory and bring them to the
top of the head for use” (Zaller 1992, 48). Scholars use the term “recency ef-
fects” to describe the disproportional impact of last impressions based on later
pieces of information.
There is a potential for order effects on policymakers’ decision-making
whenever they encounter multiple pieces of policy-relevant information in se-
quence. This is usually the case as policymakers are rarely able to form well-
informed attitudes based on single pieces of information. For instance, public
organizations tend to pursue a variety of goals (Boyne 2002; Andrews, Boyne,
and Walker 2006; Kaplan and Norton 1992; Song and Meier 2018; Chun and
Rainey 2005; Holm 2018), and if policymakers encounter information on, say,
school policies, they will have to take into account a variety of outputs and
outcomes. Based on the literature, we expect in article F (“Order Effects”) that
order effects will be evident in evaluations of sequences of information, but we
do not have a priori expectations regarding the predominance of either pri-
macy effects or recency effects.
31
Chapter 3. Empirical strategy
In this chapter, I introduce my empirical strategy for investigating Chapter 2’s
theoretical expectations. I make two important and cross-cutting design
choices. In section 3.1, I argue that survey experiments are useful for testing
the dissertation’s expectations, and in section 3.2, I choose the empirical set-
ting for my experiments. I discuss the need for direct investigations of actual
policymakers and argue that by studying policymakers at the local level it is
possible to conduct large-n investigations, even with relatively low response
rates that must be expected when studying political elites. I end the chapter
with an overview of the data collections carried out for the purpose of my in-
vestigation. Due to the focus on cross-cutting design choices, the chapter does
not include a presentation of the specific designs that have been used to test
the theoretical expectations. For information about specific designs, I refer to
Chapter 4 and the dissertation’s articles.
3.1. Research design The aim of this dissertation is to investigate psychological biases in policymak-
ers’ interpretations of policy-relevant information. Specifically, in order to test
the theoretical expectations in Chapter 2, I need to investigate causal effects
of attitudes and beliefs on policymakers’ interpretation of varying amounts of
evidence and with and without requirements to justify their interpretation.
Furthermore, I need to investigate causal effects of factors related to the
presentation of the information. This is not straightforward as I have to ad-
dress at least two major challenges.
My first major challenge concerns how to observe policymakers’ interpre-
tation of policy-relevant information in a way that allows me to test my theo-
retical expectations. One possible strategy for gathering insights of relevance
to the theoretical expectations could be to ask policymakers about their own
experiences, for example through qualitative interviews or surveys with ques-
tions about the issue. However, this strategy would be problematic given that
my expectations concern psychological processes that policymakers may not
be aware of (Taber and Lodge 2016; Levin, Schneider, and Gaeth 1998;
Druckman 2001). Therefore, in order to cast light on psychological biases, I
need to observe policymakers’ interpretation of actual information and ana-
lyze this interpretation against the empirical implications of my theoretical
expectations. This task is complicated by the fact that different policymakers
make decisions about different issues and are therefore exposed to very differ-
ent (and not necessarily comparable) information. I have therefore made it a
priority to create a controlled environment where I can observe a sufficient
number of policymakers’ interpretations of comparable pieces of information.
My second challenge concerns causality, as I need to be able to rule out
alternative explanations of behavior consistent (or inconsistent) with my the-
oretical expectations. For instance, policymakers’ interpretations of a piece of
information might very well vary with information-related political attitudes,
not because of (in)congruence between these attitudes and the information
(which is what I expect in section 2.2) but because of other, potentially un-
measured, factors correlating with the political attitudes. This could happen,
for example, if there is a relationship between policymakers’ education and
their attitudes, and if education affects how policymakers interpret infor-
mation. In that case, differences in interpretation might emerge because of
differences in education and not because attitudes bias the interpretation.
Furthermore, providers of information might choose to design policy-relevant
information in certain ways in anticipation of policymakers’ reactions. For in-
stance, in preparing information with relevance to a given policy, information
providers might frame the information differently depending on policymak-
ers’ stated attitudes towards that policy. In that case, it would not be the
presentation of information that affects how the information is interpreted
and used (which is what I expect in section 2.4) but rather the (expected) in-
terpretation and use that affect the presentation of the information, meaning
that there would be a situation of reverse causality (Blom-Hansen, Morton,
and Serritzlew 2015; James, Jilke, and Van Ryzin 2017).
In order to address these challenges, I have chosen to base the disserta-
tion’s articles on survey experiments. As I discuss in Chapter 5, survey exper-
iments are not free of problems (for instance, they often involve quite artificial
settings, meaning that they can be criticized in terms of their ecological valid-
ity), but the method is ideal in terms of dealing with the challenges discussed
above. Survey experiments allow me to observe a large number of policymak-
ers’ interpretations of comparable pieces of information in a controlled envi-
ronment, which can be designed for the explicit purpose of testing my theo-
retical expectations. Furthermore, the method allows me to create exogenous
variation in the factors that I expect to affect policymakers’ interpretations,
meaning that the causal interpretation of my results becomes much more
straightforward. In Chapter 4, I return to the specific tests of my theoretical
expectations but first, in the section below, I choose the empirical setting in
which to test my expectations.
3.2. Empirical setting and data
3.2.1. Don’t policymakers behave like other human beings?
I am by no means the first to argue that insights about psychological biases
are important for understanding policymakers’ behavior. For example, Inter-
national Relations scholars have a long tradition of drawing on psychological
insights in analyses of state leaders’ foreign policy decision-making (Jervis
1976). The latest edition of The Oxford Handbook of Political Psychology de-
votes an entire section to this, including analyses of psychological biases as a
source of intelligence failures and faulty threat perceptions (Levy 2013; Stein
2013; see also Lake 2011). Furthermore, some literature on evidence-based
policymaking has pointed to psychological biases as an obstacle to factually
informed policymaking (Bartels and Bonneau 2014), and Public Administra-
tion scholars are becoming increasingly aware of the importance of psycho-
logical insights, including insights about psychological biases, in explaining
different kinds of decision-making (Grimmelikhuijsen et al. 2017). Recently,
the Behavioural Insights Team in London even released an entire report about
the impact of psychological biases on policymaking (Hallsworth et al. 2018).
However, while existing literature has contributed importantly by point-
ing to the relevance of psychological biases in policymaking, few direct inves-
tigations have been made of psychological processes among actual policymak-
ers (for important exceptions, see Sheffer et al. 2017; Linde and Vis 2017).
From a practical perspective, it is notoriously difficult to collect useful data
among actual policymakers because, compared to other pools of potential sub-
jects, the population is limited in size and policymakers are often reluctant to
participate as experimental subjects (Druckman and Lupia 2012, 1178). From
a theoretical perspective, behavioral scientists often assume (explicitly or,
more often, implicitly) that their research is about general human behavior,
meaning that “the findings one derives from a particular sample [of subjects]
will generalize broadly; one adult human sample is pretty much the same as
the next” (Henrich, Heine, and Norenzayan 2010, 63). As a result, researchers
with an interest in the behavior of elite policymakers have “[adopted] termi-
nology and insights obtained from studies conducted with nonelite samples”
(Sheffer et al. 2017, 2) assuming that if ordinary citizens’ reasoning is distorted
by psychological biases, the same will be the case among policymakers.
I find this shortage of empirical evidence to be problematic. Although
there are certainly factors, including psychological factors, regarding which it
would not seem plausible to expect systematic differences between policymak-
ers and other human beings,7 and although it is definitely reasonable to expect
some similarities between policymakers and other human beings with regard
to psychological biases, it is also clear that there is something special about
policymakers. I find reason to expect that policymakers, on average, will differ
from other citizens with respect to at least three important factors that may
very well affect how they reason about information. In the sections below, I
discuss how policymakers’ reasoning may be affected by their political exper-
tise, by their passionate relationship to their attitudes, and by the large real-
world impact of their decision-making.
3.2.2. Policymakers are political experts
First, it can be argued that policymakers are better equipped than ordinary
citizens to navigate the often large amounts of complex information needed
to form a factually informed attitude towards a given policy. Being evidence-
based in one’s attitude formation requires a lot of time
and effort, and because it is not feasible for most people to thoroughly evaluate
a lot of information (Downs 1957), they have to rely on reasoning shortcuts to
simplify the world, although this can result in faulty judgments. However, pol-
icymakers devote themselves professionally to politics, meaning that they
have much more time to navigate through policy-relevant information (it is
their job to do so), and they tend to build up political expertise, making it
easier to make sense of policy-relevant information (Zaller 1992, 6). In that
sense, policymakers can be seen as professional information users who
should, ideally, apply higher factual standards than other citizens, and who
might therefore, on average, engage in more nuanced reasoning. In accord-
ance with this argument, some literature has suggested that political expertise
tends to limit the impact of e.g. equivalence frames on people’s political pref-
erence formation (Druckman 2004).
The difference between ordinary citizens’ and elite policymakers’ political
expertise is central to the literature on political heuristics. As noted in section
2.4.3, scholars argue that citizens can take advantage of elite cues to compen-
sate for their lack of information and “be knowledgeable in their reasoning
about political choices without necessarily possessing a large body of
knowledge about politics” (Sniderman, Brody, and Tetlock 1991, 19). Assum-
ing that citizens are able to identify policymakers who share their political
7 For example, I would find it odd to expect that policymakers do not share some
basic psychological needs (e.g. needs for social belongingness, autonomy, compe-
tence etc.) with other people (Maslow 1943; Gagné and Deci 2005).
goals, and that they are able to identify these policymakers’ policy endorse-
ments (Gilens and Murakawa 2002, 31), they should be able to use this to
make “choices that are approximately rational” (Sniderman, Brody, and
Tetlock 1991, 165). This line of reasoning, of course, assumes that elite policy-
makers are able to digest information in a way that, to use Sniderman, Brody,
and Tetlock’s words, is “approximately rational” (ibid.).
However, some literature suggests that political expertise may strengthen
rather than weaken certain psychological biases. For instance, it has been ar-
gued that political knowledge can serve as “ammunition with which to coun-
terargue incongruent facts, figures, and arguments” (Taber and Lodge 2006,
757). This may make knowledgeable people more resistant to persuasive at-
tempts (Zaller 1992), meaning that there may e.g. be weaker framing effects
among political experts.8 However, the improved ability to counterargue in-
congruent facts, figures, and arguments also means that political experts are
better able to defend desired conclusions when interpreting information.
Thus, political knowledge has been found to magnify people’s tendency to en-
gage in directionally motivated reasoning (Taber and Lodge 2006; Taber,
Cann, and Kucsova 2009).
3.2.3. Policymakers are passionate about their attitudes
Another personal characteristic on which policymakers may differ from other
citizens, and which may also affect their reasoning, is attitude importance de-
fined as “the degree to which a person is passionately concerned about and
personally invested in an attitude” (Krosnick 1990, 60). Policymakers have
chosen a career path where attitudes play a crucial role. It is their job to form
attitudes and to fight for these attitudes in their decision-making. I therefore
find it reasonable to expect that policymakers, on average, will feel more pas-
sionate about their attitudes than other citizens. Passionately held attitudes
have been found to evoke strong emotional reactions in people, and people
tend to be more motivated to defend such attitudes than less passionately held
attitudes (Eagly and Chaiken 1993). As a result, attitude importance has been
linked to weakened framing effects (Druckman and Nelson 2003) but also to
increased tendencies to engage in directionally motivated reasoning (Taber
and Lodge 2006; Taber, Cann, and Kucsova 2009).
8 Note, however, that it has been suggested that while political expertise may weaken
the effects of equivalence framing, political experts may be more affected by issue
frames as knowledgeable people are better at connecting their opinions and the con-
siderations suggested by frames (Nelson, Oxley, and Clawson 1997; Druckman and
Nelson 2003).
3.2.4. Judgmental mistakes matter in policymaking
Political expertise and attitude importance are personal characteristics that
may affect policymakers’ reasoning about information. In addition to these
personal characteristics, policymakers differ from other citizens in the role they
play in society, and this, too, may affect their susceptibility to psychological
biases. For instance, policymakers differ from most other human beings with
respect to the societal impact of their decision-making, and
there is reason to believe that this will affect their mindset when they approach
decision-relevant information. Dan Kahan writes about ordinary citizens that:
On most of the policy-relevant facts (…) an ordinary person’s “beliefs” are of no
policy significance. She just does not matter enough as a consumer, voter,
participant in public deliberations, and so on, to affect the incidence of the risk
in question (say, climate change as a result of human CO2 emissions) or the
adoption of any policy to reduce it (say, enactment of a carbon tax). Accordingly,
any “mistake” someone makes in acting on mistaken beliefs about those facts
will be costless in that regard (Kahan 2016b, 4).
Because of the limited societal impact of most citizens’ actions, at least at the
individual level, citizens have a limited incentive to invest cognitive effort in
thorough, unbiased evaluations of policy-relevant information. However, the
same cannot be said about policymakers as they are at the center of decision-
making processes with huge importance to many people. They have been
elected to be the ones who make decisions about, say, whether or not to enact
a carbon tax, and therefore, any “mistake” a policymaker makes in acting on
mistaken beliefs about policy-relevant facts will not be costless. Following Ka-
han’s line of reasoning, policymakers should feel a responsibility to invest cog-
nitive effort in thorough, unbiased evaluations, meaning that they might ap-
proach policy-relevant information with another mindset than other citizens.9
9 The idea that professional roles may matter to people’s reasoning has been advanced
by Kahan (2016b), who argued that professionals may develop “habits of mind, acquired
through training and experience, distinctively suited to specialized decision-making”
which may in turn insulate them from certain psychological biases (ibid., 8). The
idea has found support in a study of politically motivated reasoning among judges,
where Kahan et al. (2016) found that state judges from the USA were, like members
of the general public, affected by their ideological worldviews when asked about is-
sues like climate change and marijuana legalization. However, when asked to re-
spond to statutory interpretation problems, the judges differed from the general
public by not being affected by ideology, suggesting that they entered a professional,
impartial mindset when making decisions within their professional domain (i.e. de-
cisions involving legal reasoning).
3.2.5. Choice of empirical setting and data used in the dissertation
It is clear from the sections above that there are arguments pointing in differ-
ent directions with regard to psychological biases among policymakers. Some
arguments suggest that policymakers will tend to be more affected by psycho-
logical biases than other citizens in their interpretation of policy-relevant in-
formation. If this is the case, we can learn a lot about psychological biases in
policymaking by studying the phenomenon among ordinary citizens; the re-
sults will simply tend to be conservative. However, other arguments suggest
that policymakers will be less biased, and in that case, generalizing from citi-
zen behavior to policymakers will be problematic. In any case, based on the
existing evidence, we cannot take for granted that policymakers behave like
other citizens. Druckman and Lupia argue that “typical experimental subjects
often lack the experience needed to act ‘as if’ they were professional legisla-
tors” (Druckman and Lupia 2012, 1178). To cast light on psychological biases
in policymaking, I have therefore made it a top priority to collect data among
elected policymakers.
As I noted in section 3.2.1, however, collecting useful data on elected poli-
cymakers is difficult from a practical perspective. While thousands of citizen
responses can easily be collected within days or even hours, there is only a
limited number of policymakers and many of them are reluctant to participate
as research subjects (ibid.). Therefore, studies of psychological biases among
policymakers are typically based on a very limited number of respondents. For
instance, Linde and Vis (2017) surveyed 46 Dutch members of parliament, and
Sheffer and colleagues (2017) surveyed samples of Canadian, Belgian, and Is-
raeli members of parliament with 18 ≤ n ≤ 113. Very small sample sizes would
be problematic to my investigation given that, as I argued in the sections
above, policymakers may evaluate policy-relevant information in ways that
are more nuanced than suggested by existing literature on psychological bi-
ases. I need quite large sample sizes to be able to identify even small effects.
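The point about sample size can be made concrete with a back-of-the-envelope power calculation. The sketch below is my own illustration, not a computation from the dissertation: it uses the standard normal-approximation formula for a two-sample comparison of means, with Cohen’s conventional effect-size benchmarks standing in for the (unknown) true effects.

```python
# Approximate per-group sample size for a two-sample comparison,
# using the normal-approximation formula. Effect sizes below are
# Cohen's conventional benchmarks, not estimates from this work.
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Respondents needed per group for a two-sided z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # quantile matching the power target
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "small" effect (d = 0.2) needs roughly 400 respondents per group,
# while a "medium" effect (d = 0.5) needs only about 60.
print(n_per_group(0.2))  # 393
print(n_per_group(0.5))  # 63
```

Detecting small effects is thus several times more demanding than detecting medium ones, which is why a respondent pool capped at a few hundred policymakers quickly becomes limiting.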
Following the considerations above, I have chosen to focus on policymak-
ers at the local level, which seems an ideal pool of respondents for balancing
the need for real policymakers against the need for large samples. As
will be clear below, my most important source of data is city councilors in Den-
mark. There are 2,445 city councilors in Denmark, which means that large
samples can be expected, even with relatively low response rates.10 Danish city
10 My dissertation contains answers from around 1,000 Danish city councilors, which
is equal to around 40 percent of all Danish city councilors. If I had chosen to study
Danish members of parliament, a response rate of 100 percent would only produce
179 responses.
councilors are elected through municipal elections every four years. Municipal
elections are characterized by professionalized election campaigns (Hansen
2017), extensive media coverage (Albæk and Andersen 2013; Elmelund-
Præstekær and Skovsgaard 2017) and relatively high voter turnout, fluctuating
around 70 percent (Hansen 2018). About 95 percent of the city councilors rep-
resent political parties that also compete for power in the national parlia-
ment.11 The city councilors are responsible for the local delivery of core welfare
services, such as public schools, childcare, elderly care, and employment ac-
tivities, and the municipal budgets represent about half of all public expendi-
ture in Denmark. In that sense, while city councilors may be less professional-
ized political actors than e.g. members of national parliaments, they are
real policymakers who have been elected to make decisions of enormous im-
portance to the citizens in their jurisdictions.
Table 2 summarizes the data collections that have been made for the dis-
sertation’s articles. In addition to the data on Danish city councilors, the dis-
sertation contains data collected among city councilors from other countries
and among (non-policymaker) citizens. In article E (“Politicians and Bureau-
crats”), addressing the external validity of our claims is a high priority, and we
therefore run our experiments on city councilors in Denmark, Belgium,
Italy, and the USA. Citizen data are used differently across articles. In articles
B and C (“Role of Evidence” and “Justification Requirements”), representative
samples of citizens are used in explorative analyses of differences between pol-
icymakers’ and citizens’ behavior, whereas articles D and F (“Biased, Not
Blind” and “Order Effects”) are solely based on citizen data. In later chapters,
I will discuss the relevance of articles D and F in relation to policymaking, but
following the arguments above, it is clear that caution is needed when gener-
alizing from these articles’ results to the behavior of actual policymakers.
11 The remaining councilors represent local parties or have been elected as individu-
als without affiliation to a political party.
Table 2: Overview of data collections in dissertation

No. | Used in article | Description | Respondents | Data collected
1* | A, B, and C | Experiments testing for politically motivated reasoning, including moderating effects of information load and justification requirements. | Danish city councilors (n = 988) | November-December 2014
2 | C and E | Experiments testing for equivalence framing effects, issue framing effects, source cue effects, and effects of justification requirements on effort in search for and evaluation of information. | Danish city councilors (n = 1,025) | November-December 2016
3 | E | Experiments testing for equivalence framing effects, issue framing effects, and source cue effects. | Italian city councilors (n = 1,756) | November-December 2016
4 | E | Experiments testing for equivalence framing effects, issue framing effects, and source cue effects. | Flemish city councilors (n = 2,257) | February-March 2017
5 | E | Experiments testing for equivalence framing effects, issue framing effects, and source cue effects. | US city councilors (n = 884) | April-May 2017
6 | B | Experiments testing for politically motivated reasoning, including moderating effects of information load. | Representative sample of Danish citizens (n = 1,006) | November 2016
7 | C | Experiment testing for moderating effects of justification requirements on politically motivated reasoning. | Representative sample of Danish citizens (n = 2,109) | December 2016
8 | C | Experiment testing for effects of justification requirements on effort in search for and evaluation of information. | Representative sample of Danish citizens (n = 1,063) | February 2017
9 | D | Experiments testing for choice-driven motivated reasoning. | Political Science students (n = 234) | September 2015
10 | D | Experiments testing for choice-driven motivated reasoning. | Political Science students (n = 232) | September 2016
11 | D | Experiments testing for choice-driven motivated reasoning. | Political Science students (n = 238) | September 2017
12 | F | Experiment testing for order effects. | Representative sample of British citizens (n = 1,304) | December 2017

Note: I refer to the individual articles for more thorough descriptions of data collection procedures.
*: Data collection no. 1 was carried out in advance of my PhD scholarship, in collaboration with Niels Bjørn Grund Petersen, Casper Mondrup Dahlmann, Asbjørn Hovgaard Mathiasen, and Martin Baekgaard.
Chapter 4. Results
Having decided on the dissertation’s overall empirical strategy, I now move on
to the specific tests of Chapter 2’s theoretical expectations. In sections 4.1 and
4.2, I present results from articles A-D about the impact of attitudes and be-
liefs on policymakers’ interpretation of information and moderating effects of
information load and justification requirements. In section 4.3, I present re-
sults related to equivalence framing effects, issue framing effects, source cue
effects, and order effects on policymakers’ interpretation of information.
4.1. Are policymakers biased by attitudes and beliefs?
As stated in Chapter 2, I expect that policymakers will tend to engage in moti-
vated reasoning when interpreting policy-relevant information. Policymakers
will often have strong political attitudes towards the issues about which they
make policies, and I expect that they will tend to be motivated to defend these
attitudes. Furthermore, policymakers make important choices through their
job, and based on literature on choice-driven motivated reasoning, I find rea-
son to expect that policymakers will be motivated to defend these choices
when subsequently interpreting choice-related information.
4.1.1. Goal reprioritization in light of political attitudes
The first form of motivated reasoning that is tested for in the dissertation is
goal reprioritization in light of policymakers’ political attitudes. In section 2.2,
I expected that policymakers would tend to reweight the perceived importance
of individual pieces of ambiguous information in light of the (in)congruence
between the information and conclusions that the policymakers are motivated
to defend. In article A (“Goal Reprioritization”), we test this expectation on a
sample of Danish city councilors using an experimental design made for the
purpose of the test.
In our experiment, we asked respondents to evaluate a table with fictional
performance information about two schools. The table contained information
about test scores and student wellbeing at the two schools, and we asked the
respondents to evaluate, based on this information, which school performed
best overall. High test scores as well as high levels of student wellbeing are
desired school outcomes, but we designed the table’s information to be am-
biguous in the sense that the school that performed best on test scores per-
formed worst on student wellbeing, and the other way around. Thus, in order
to evaluate which school performed best, respondents had to choose which
indicator to give the highest priority, meaning that, by analyzing the respond-
ents’ answers, we could observe their prioritization between the two perfor-
mance indicators.
To test for goal reprioritization, we randomly assigned our respondents to
either a non-politicized or a politicized version of the information. In the non-
politicized version, the schools were presented with the sterile names “school
A” and “school B”; in the politicized version, respondents were told that the
schools were public and private. Within each condition, we further random-
ized which school performed best on what indicator, meaning that the exper-
iment had a total of four experimental conditions.12
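The assignment logic amounts to two independent randomizations, which can be sketched as follows. The condition labels here are my own shorthand, not the article’s terminology:

```python
# Minimal sketch of the 2 x 2 random assignment described above;
# the factor labels are illustrative, not the article's wording.
import random

LABELING = ("neutral", "politicized")   # "school A/B" vs. "public/private"
BEST_ON_TESTS = ("first", "second")     # which school wins on test scores

def assign(rng: random.Random) -> tuple[str, str]:
    """Randomize the two factors independently, yielding one of four conditions."""
    return rng.choice(LABELING), rng.choice(BEST_ON_TESTS)

rng = random.Random(2018)
sample = [assign(rng) for _ in range(10_000)]

# Simple randomization spreads respondents roughly evenly across the
# four cells, so each condition holds about 25 percent of the sample.
share = sample.count(("politicized", "first")) / len(sample)
print(round(share, 2))
```

Randomizing the two factors independently is what makes the four cells comparable in expectation, so differences in answers can be attributed to the manipulations.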
The sterile, non-politicized conditions served as a baseline, allowing us to
observe respondents’ prioritization of test scores and student wellbeing at the
outset, that is, without a link between attitudes and information to distort the
prioritization. In comparison, by revealing the sector affiliation of the schools
in the politicized conditions, we expected to activate a motivation to defend
ideologically founded attitudes regarding public and private delivery of public
services. The relative role of the public and private sector in delivering public
services is highly contested in Danish politics, and Danish city councilors de-
cide on a regular basis whether to contract out services for which they are re-
sponsible in their municipalities. Most city councilors therefore have strong
attitudes towards the issue and according to the idea of goal reprioritization,
the councilors were expected to reprioritize the relative importance of test
scores and student wellbeing in light of these attitudes when evaluating the
politicized information. Our design allowed us to test this expectation by ana-
lyzing respondents’ prioritization of test scores and student wellbeing in the
baseline conditions compared to the politicized conditions, in light of their at-
titudes towards public and private service provision. Figure 2 shows the rela-
tionship between respondents’ attitudes13 and their predicted probability of
12 In one condition, school A performed best on test scores and worst on wellbeing,
in another condition, school B performed best on test scores and worst on wellbeing,
in a third condition, the public school performed best on test scores and worst on
wellbeing, and in a fourth condition, the private school performed best on test scores
and worst on wellbeing.
13 We measured respondents’ attitudes with three items from Baekgaard and
Serritzlew (2016) and constructed an index (called “pro public”) ranging from 0-1,
where higher values correspond to a stronger preference for the public sector.
giving the highest priority to student wellbeing in the four experimental con-
ditions.
Figure 2: Predicted probability of prioritizing wellbeing over test scores
Note: Brackets indicate 95 % confidence intervals. Reprint from article A (“Goal Reprioriti-
zation”).
As the left-most panels of Figure 2 show, city councilors who prefer the public
sector tend to find student wellbeing more important than test scores in our
experiment’s baseline conditions (that is, when the schools’ sector affiliation
is not revealed), whereas councilors who prefer the private sector tend to find
test scores more important. However, when respondents learn that the
schools are public and private, they reprioritize the goals, as expected, in de-
fense of their attitudes. Thus, when respondents learn that the public school
performs better on test scores and worse on wellbeing (group 3), the relation-
ship between preferring the public sector and giving high priority to student
wellbeing becomes negative. Now, the vast majority of the city councilors who
prefer the public sector begin to find test scores more important than student
wellbeing, whereas councilors who prefer the private sector begin to give
higher priority to student wellbeing. Similarly, when respondents learn that a
public school performs better on wellbeing and worse on academic achieve-
ments (group 4), the baseline condition’s positive relationship between pre-
ferring the public sector and giving high priority to student wellbeing becomes
even stronger.
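To see how such predicted probabilities follow from a logistic model of the choice, consider the sketch below. The coefficients are invented purely for illustration; the article’s actual estimates differ:

```python
# Logistic (logit) predicted probabilities, with invented coefficients;
# the article's actual estimates differ.
import math

def prob_prioritize_wellbeing(pro_public: float, intercept: float, slope: float) -> float:
    """Predicted probability of prioritizing wellbeing over test scores,
    given the 0-1 'pro public' attitude index."""
    return 1 / (1 + math.exp(-(intercept + slope * pro_public)))

# Baseline conditions: a positive slope, so councilors who prefer the
# public sector (index near 1) are more likely to prioritize wellbeing.
print(round(prob_prioritize_wellbeing(0.0, intercept=-1.0, slope=2.0), 2))  # 0.27
print(round(prob_prioritize_wellbeing(1.0, intercept=-1.0, slope=2.0), 2))  # 0.73

# Politicized condition where the public school wins on test scores:
# the relationship reverses, captured here by a negative slope.
print(round(prob_prioritize_wellbeing(1.0, intercept=1.0, slope=-2.0), 2))  # 0.27
```

A sign flip in the slope is the model counterpart of the reversal described above: the same attitude index predicts the opposite prioritization once the schools’ sector affiliation is revealed.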
4.1.2. Motivated numeracy in light of political attitudes
The evidence from article A shows a tendency among policymakers to engage
in goal reprioritization in defense of political attitudes when interpreting am-
biguous information. As I argued in section 2.2, however, I do not expect ac-
cess to ambiguous information to be necessary for policymakers to engage in
motivated reasoning. In situations where policymakers face evidence that un-
ambiguously supports one conclusion, I expect them to misinterpret the evi-
dence more often if it does not support their desired conclusion.
In the dissertation, this expectation is tested in four motivated numeracy
experiments (Kahan et al. 2017; Baekgaard and Serritzlew 2016) that have all
been run on samples of Danish city councilors and are reported in articles B
and C (“Role of Evidence” and “Justification Requirements”). In each experi-
ment, city councilors were presented with a table of fictional performance in-
formation about two public service providers and asked to evaluate, based on
the table, which provider performed best. As in article A’s goal reprioritization
experiment, respondents were randomly assigned to non-politicized baseline
conditions where the providers’ sector affiliation was not revealed or to polit-
icized conditions where one provider was labelled public and the other private.
Examples of the tables are shown in Figure 3 below, which shows material
from article C’s experiment. Here, the information was about user satisfaction
with different providers of elderly care.
As can be seen in Figure 3, the tables in the experimental material were
cognitively demanding, as they contained absolute numbers that were not in-
formative by themselves. In order to make sense of the information, satisfac-
tion rates needed to be computed. However, contrary to the information in
article A’s goal reprioritization experiment, the information was unambiguous
in the sense that one provider had a higher satisfaction rate than the other,
meaning that respondents’ answers could be coded as either correct or incor-
rect.14 Respondents could correctly choose the provider with the highest sat-
isfaction rate or they could misinterpret the information and choose the pro-
vider with the lowest satisfaction rate. In article B’s experiments, the tables
14 In Figure 3, provider A has the highest satisfaction rate in group A (83 percent vs.
74.7 percent satisfied users), provider B has the highest satisfaction rate in group B,
the municipal provider has the highest satisfaction rate in group C, and the private
provider has the highest satisfaction rate in group D.
45
concerned service areas other than elderly care,15 and the specific numbers in
the tables were different from the ones in Figure 3, but the experimental ma-
terial was designed based on the same logic. As in Figure 3, the tables in article
B’s experiments were unambiguous in the sense that answers could be coded
as either correct or incorrect, and respondents were randomly assigned to
non-politicized baseline versions of the information where the providers’ sec-
tor affiliation was not revealed16 or to politicized versions where one provider
was public and the other was private.
Figure 3: Experimental groups A-D in article C (“Justification Requirements”)

Group A (baseline)
                      Number of        Number of
                      satisfied users  dissatisfied users
Supplier A            83               17
Supplier B            127              43

Group B (baseline)
                      Number of        Number of
                      satisfied users  dissatisfied users
Supplier A            127              43
Supplier B            83               17

Group C (politicized)
                      Number of        Number of
                      satisfied users  dissatisfied users
Municipal supplier    83               17
Private supplier      127              43

Group D (politicized)
                      Number of        Number of
                      satisfied users  dissatisfied users
Municipal supplier    127              43
Private supplier      83               17

Note: Reprint from article C (“Justification Requirements”).
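To make the task concrete, the computation respondents had to perform on such a table can be sketched as follows. This is an illustrative reconstruction using the numbers from Figure 3’s group A, not material shown to respondents:

```python
def satisfaction_rate(satisfied: int, dissatisfied: int) -> float:
    """Share of satisfied users among all users of a provider."""
    return satisfied / (satisfied + dissatisfied)

# Figure 3, group A: supplier A has 83 satisfied and 17 dissatisfied users,
# supplier B has 127 satisfied and 43 dissatisfied users.
rate_a = satisfaction_rate(83, 17)   # 0.83
rate_b = satisfaction_rate(127, 43)  # roughly 0.747

# The correct answer is the supplier with the higher rate, even though
# supplier B has more satisfied users in absolute terms (127 > 83).
best = "Supplier A" if rate_a > rate_b else "Supplier B"
```

Respondents who relied on the absolute number of satisfied users (127 vs. 83) would pick the wrong supplier, which is exactly what makes the tables cognitively demanding.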
With this experimental design, it was possible to test for the expected bias in
policymakers’ interpretation by analyzing respondents’ ability to identify the
best performing provider in the politicized conditions compared to the base-
line conditions, in light of their attitudes towards public and private service
provision. In the baseline conditions (groups A and B in Figure 3), only the
respondents’ numeracy should affect their ability to correctly identify the best
performing provider. In the politicized conditions (groups C and D in Figure 3),
by contrast, there was a link between the information and respondents’ atti-
tudes towards public and private service delivery, and these attitudes were
expected to bias policymakers’ ability to correctly identify the best performing
provider.
15 The tables in article B concerned schools, rehabilitation centers, and providers of
road maintenance.
16 One of the dissertation’s four motivated numeracy experiments (experiment 3 in
article B) was designed to test for moderating effects of information load and did not
have non-politicized conditions (cf. section 4.2.1 of this summary report).
The results in articles B and C were overall supportive of this expectation.
In the baseline conditions, the vast majority of the city councilors were able to
identify the best performing service provider correctly, and there was no sig-
nificant association between the councilors’ attitudes and their ability to iden-
tify the best performing provider. However, when the city councilors were told
that the providers were public and private, their ability to identify the best
performing provider began to depend on the political convenience of the cor-
rect evaluation in light of their attitudes. Thus, in the politicized conditions,
city councilors were much better at identifying the best performing service
provider when the information supported their desired conclusion than when
the information challenged it. The overall pattern was the same in all four ex-
periments,17 and combined with the evidence of goal reprioritization in article
A, the results provide strong evidence of politically motivated reasoning.
17 The dissertation’s articles contain a total of six statistical models comparing the
relationship between policymakers’ attitudes and their ability to correctly under-
stand politicized vs. non-politicized information. Four of these models provide sta-
tistically significant evidence of bias, whereas in two models (model 4 in article B’s
Table 1 and model 1 in article C’s Table 1), the coefficients point in the expected
direction but are not statistically significant.
Following the discussion in sections 3.2.1-3.2.4, it is interesting to make
some explorative comparisons between policymakers’ and ordinary (non-pol-
icymaker) citizens’ behavior. As mentioned in section 3.2.5, in addition to the
dissertation’s policymaker data, some data has been collected among citizens
as well. For instance, all four motivated numeracy experiments were also run
on representative samples of Danish citizens, and interestingly, the impact of
attitudes on citizens’ evaluations did not differ significantly from the impact
of attitudes among policymakers (see article B’s appendix A2 and article C’s
appendix A1). I return to the comparison of policymakers and citizens in sec-
tion 4.2 and in Chapter 5.
4.1.3. Choice-driven motivated reasoning
The evidence presented in sections 4.1.1 and 4.1.2 concerns politically moti-
vated reasoning in light of policymakers’ political attitudes. However, in Chap-
ter 2, I argued that factors other than political attitudes may motivate policy-
makers to engage in biased reasoning as well. I argued that in addition to being
motivated to defend political attitudes, policymakers make important choices
as part of their job (allocate scarce resources, reform policies etc.) and that
they may be expected to attempt to defend these choices when subsequently
interpreting choice-related information.
The dissertation does not include a direct test of choice-driven motivated rea-
soning among policymakers. However, I do provide some related evidence in
article D (“Biased, Not
Blind”), where I set out to investigate motivated reasoning in a case where po-
litical attitudes are not relevant to people’s evaluations. In the article, I inves-
tigate choice-driven motivated reasoning in public service users’ evaluation of
performance information about their own service providers. I hypothesize
that when service users have chosen to use a certain service provider instead
of competing options, they will be motivated to defend their choice through
post-decisional motivated reasoning about choice-relevant information. Spe-
cifically, I hypothesize that service users will engage in goal reprioritization,
seeking to magnify the perceived importance of advantages associated with
their chosen provider (relative to alternative, unchosen options) while down-
playing the importance of relative disadvantages associated with that choice.
In the article, I report nine experimental tests of this hypothesis, which I
ran on samples of first-semester students at Aarhus University’s Department
of Political Science. In each test, respondents were presented with a table of
information about the performance of Aarhus University’s Political Science
degree program compared to that of another Danish university. They were
asked to rate the importance of each individual piece of information for an
overall evaluation of the educational quality in the two programs. I random-
ized whether Aarhus University was shown to perform better or worse than
the other university, and I expected respondents to find the information more
important when Aarhus University performed best. The results in article D
generally but weakly support my expectation. In all nine tests, respondents
did rate the information as more important when Aarhus University performed
best than when the competing university performed best. However, the
effects were small and, in most tests, statistically insignificant.18 Even when I
pooled data in order to maximize the power of the studies, the results did not
become consistently significant.
18 One test revealed statistically significant results, two tests revealed marginally sig-
nificant results, and six tests revealed statistically insignificant results.
The results in article D give reason for some optimism. Findings of politi-
cally motivated reasoning have led researchers to fundamentally question de-
cision-makers’ ability to make factually informed decisions (Baekgaard and
Serritzlew 2016), and sections 4.1.1 and 4.1.2 suggest that this pessimism is
warranted when decision-makers have political attitudes towards the infor-
mation. However, if article D’s results can be generalized to policymakers, the
results suggest that motivated reasoning can, in some situations, be less dis-
tortive. Of course, following the discussion in section 3.2 about the need for
empirical studies of actual policymakers, a great deal of caution is needed in
terms of generalizing article D’s results to the behavior of policymakers. How-
ever, as I wrote in section 4.1.2, there was no significant difference between
policymakers and other citizens in terms of the tendency to misinterpret atti-
tude-incongruent information. Based on this evidence, there is no reason to be-
lieve that policymakers are fundamentally different from citizens in terms of
their tendency to be biased at the outset. I return to this issue in Chapter 5.
4.2. Do contextual factors moderate the impact of political attitudes on
policymakers’ interpretation of policy-relevant information?
Until now, the focus has been on investigating whether prior attitudes and be-
liefs bias policymakers’ interpretation of policy-relevant information. The ev-
idence presented above showed that political attitudes in particular are highly
distortive to policymakers’ interpretation. Policymakers tend to be motivated
to interpret information in ways that support their political attitudes, and they
are often capable of doing so. Given these results, an important question is
under what conditions these biases are most and least influential. Below, I
present evidence on the effects of varying amounts of policy-relevant infor-
mation and of justification requirements on policymakers’ interpretation of
information.
4.2.1. Variations in information load
In section 2.3.1, I argued that while political attitudes tend to bias policymak-
ers’ interpretation of policy-relevant information, the attitudes should be ex-
pected to matter less when the policymakers are presented with large amounts
of evidence in favor of a given conclusion. In article B (“Role of Evidence”), we
test this expectation on a sample of Danish city councilors using an extension
of the motivated numeracy design, which I described in section 4.1.2.
In our experiment, we presented respondents with a table of fictional per-
formance information about two rehabilitation centers and asked the re-
spondents to evaluate, based on the table, which center performed best. As in
Figure 3 (see section 4.1.2), the information was cognitively demanding but
unambiguous in the sense that responses could be coded as either correct or
incorrect. The table reported the number of successful and unsuccessful treat-
ments at the two rehabilitation centers, and if respondents calculated success
rates, they would find that one center had a success rate of 83 percent whereas
the other had a success rate of 75 percent. Our experiment differed from the
design described in section 4.1.2 by not having any non-politicized baseline
conditions. All respondents were told that the centers were public and private,
and we randomized whether the public or private center was shown to perform
best. However, in order to test for the expected effect of varying amounts of
evidence, we randomized for each respondent whether the table contained in-
formation about one, three, or five kinds of treatments. In all conditions, one
provider performed best in relation to all kinds of treatments (meaning that
one provider performed better than the other on one, three, or five indicators).
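The structure of this evaluation task can be sketched as follows. The counts below are hypothetical numbers I chose to match the 83 and 75 percent success rates reported above; they are not the experiment’s actual figures:

```python
def success_rate(successes: int, failures: int) -> float:
    """Share of successful treatments for one treatment type."""
    return successes / (successes + failures)

# Hypothetical counts consistent with the reported 83% vs. 75% success rates;
# the same center performs best on every indicator shown.
best_center  = [(83, 17)] * 5   # 83% success on each treatment type
other_center = [(75, 25)] * 5   # 75% success on each treatment type

for k in (1, 3, 5):  # randomized information load: 1, 3, or 5 treatment types
    # With k indicators shown, 2 * k rate calculations are needed before the
    # correct evaluation can be made.
    wins = sum(success_rate(*best_center[i]) > success_rate(*other_center[i])
               for i in range(k))
    assert wins == k  # the same center is best on all k indicators
```

The sketch makes the information-load manipulation visible: more indicators mean unambiguously more evidence for the same conclusion, but also more calculations before that conclusion can be reached.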
We expected that attitudes towards public
and private service delivery would bias policymakers’ ability to correctly iden-
tify the best performing rehabilitation center at the outset, but that the im-
pact of attitudes would decrease when respondents were presented with larger
amounts of evidence. Surprisingly, however, our results showed that the larger
amount of evidence increased rather than decreased the impact of attitudes.
As Figure 4 below shows, city councilors’ tendency to misinterpret attitude-
incongruent information while interpreting attitude-congruent information
correctly became stronger when more evidence was included in the table. The
difference from one to three pieces of evidence was statistically insignificant,
but the difference from one to five pieces of evidence was significant. Our data
did not allow us to investigate reasons for these surprising results, but a plau-
sible explanation is that we may have created an experience of information
overload (Schick, Gordon, and Haka 1990; Eppler and Mengis 2004) by in-
creasing the amount of evidence in the tables. In other words, the infor-
mation may have become too overwhelming to comprehend (with the increas-
ing amount of information, not only one but three or five calculations needed
to be made in order to make sense of the information), and respondents may
have reacted by drawing on attitude-related cues instead of trying to do the
calculations.
Figure 4: Information load, attitudes, and correct interpretations
[Figure: six panels (5a-5f) plotting the likelihood of correct interpretation
against attitudes towards public sector delivery, for one, three, and five pieces
of information received, shown separately for the conditions where the private
or the public rehabilitation center performed best.]
Note: Estimated relationships with 95% confidence intervals. The x-axis runs from 0-1 with
higher values corresponding to stronger support for public sector delivery of services. Re-
print from article B (“Role of Evidence”).
As mentioned in section 4.1.2, the dissertation’s motivated numeracy experi-
ments, including the one testing for effects of varying amounts of evidence,
were run on representative samples of Danish citizens as well. In article B,
changing the amount of evidence to be evaluated did not lead to any significant
change (neither a decrease nor an increase) in the impact of attitudes among
citizens, unlike the evidence among our policymaker respondents. This sug-
gests that there may be some differences between the behavior of policymak-
ers and the behavior of other citizens, but the difference between citizens’ and
policymakers’ reaction to the increasing amount of evidence was not statisti-
cally significant (see article B’s appendix A3).
The results suggest that policymakers’ ability to make factually informed
decisions can be improved if they are not overloaded with policy-relevant in-
formation. Furthermore, based on the results, there is reason to believe that
attitudes may matter less in policymakers’ interpretation of information if the
information is presented in more easily comprehensible (less cognitively de-
manding) ways (Eppler and Mengis 2004), but this is a question for future
research to investigate.
4.2.2. Justification requirements
In section 2.3.2, I argued that policymakers should be expected to engage in a
more effortful search for and processing of policy-relevant information when
they are asked to justify their interpretation of the information compared to
when they are not asked to justify their interpretation. Furthermore, I ex-
pected political attitudes to matter less to policymakers’ interpretation of pol-
icy-relevant information when the policymakers are asked to justify their in-
terpretation of the information. In article C (“Justification Requirements”), I
test these expectations on samples of Danish city councilors, using a decision
board experiment and an extension to the motivated numeracy design, which
I described in section 4.1.2.
The decision board experiment, which I ran using the open source tool
MouselabWEB (Willemsen and Johnson 2011), was designed to test for effects
of justification requirements on respondents’ effort in searching for and pro-
cessing information. I asked respondents to click through a homepage with
ten boxes of information about the performance of two schools on a variety of
performance indicators and to evaluate, based on the information in the
boxes, which school performed best overall. In order to see the information in
each box, respondents had to click on the box, and the information remained
visible as long as the respondents’ cursor was placed over the box. Because
respondents had to click through the decision board’s information, the tech-
nique made it possible to track the respondents’ behavior when searching for
and processing the information. Respondents were free to click through as
many of the boxes as they liked and to spend as much time on the information
as they liked. I used the number of opened boxes as a measure of effort in
respondents’ search for information, and I used the time spent on the in-
formation as a measure of effort in respondents’ processing of the infor-
mation.
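The two effort measures can be illustrated with a short sketch. The click-log format and field names here are my own assumptions for illustration, not MouselabWEB’s actual output:

```python
# Hypothetical click log for one respondent: (box_id, seconds the cursor
# remained over the box). Format and labels are illustrative only.
log = [("school_A_grades", 4.2), ("school_B_grades", 3.1),
       ("school_A_grades", 2.0), ("school_B_absence", 5.5)]

def effort_measures(log):
    """Return (search effort, processing effort): the number of distinct
    boxes opened and the total time spent on opened boxes."""
    boxes_opened = len({box for box, _ in log})
    time_spent = sum(seconds for _, seconds in log)
    return boxes_opened, time_spent

boxes, seconds = effort_measures(log)  # 3 distinct boxes, roughly 14.8 seconds
```

In this sketch, reopening an already-seen box adds to processing time but not to search effort, matching the distinction between the two measures described above.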
In order to test for effects of justification requirements, I randomly allo-
cated respondents to either a control group or a treatment group. I asked both
groups to evaluate which school performed best, but in addition, I asked the
treatment group to justify their evaluation through a written argument for the
evaluation. My interest was not in respondents’ answers but in the impact of
the justification requirement on respondents’ behavior before answering the
question. The experimental results were mixed but overall supportive of the
expected effect of the justification requirement. There was no significant effect
on the number of boxes opened, but the justification requirement did make
respondents spend more time on the boxes they opened.
In order to test for the expected effect of justification requirements on the
distortive impact of political attitudes, I added two experimental conditions to
the motivated numeracy experiment, which I summarized in Figure 3 (see sec-
tion 4.1.2). I asked respondents in these justification conditions to evaluate
information about a public and a private provider of elderly care (identical to
the information in Figure 3’s groups C and D). In addition, I asked the re-
spondents in these conditions to justify their evaluation through a written ar-
gument for the evaluation, meaning that I could investigate the effect of the
justification requirement by comparing respondents’ answers in Figure 3’s
groups C and D to the answers in the justification conditions. According to my
expectation, attitudes should affect respondents’ answers less in the justifica-
tion conditions than in Figure 3’s groups C and D. However, as was also the
case with variations in information load, the city councilors reacted contrary
to my expectation. Instead of becoming less affected by their information-re-
lated attitudes, the councilors became significantly more affected by these at-
titudes when they were asked to justify their evaluation.
As previously noted, I ran article C’s experiments on representative sam-
ples of Danish citizens as well, and interestingly, among these citizens, reac-
tions to the justification requirements conformed more closely to my expecta-
tions from Chapter 2. The citizens tended to engage in slightly more effortful processing
of (but not search for) information when they were asked to justify their eval-
uation, and the justification requirement caused the citizens to be less affected
by their attitudes in the motivated numeracy experiment. The citizens’ reac-
tions to the justification requirements were only marginally significant (p <
0.10), but the difference between the policymakers’ and citizens’ reactions in
the motivated numeracy experiment was statistically significant (see article
C’s appendix A3). I return to possible reasons for this difference between pol-
icymakers’ and citizens’ behavior in Chapter 5 where I also discuss the broader
implications of the results for future research on the behavior of political
elites.
4.3. The impact of how information is presented
The results in sections 4.1 and 4.2 all concern the impact of prior attitudes and
beliefs on policymakers’ interpretation of policy-relevant information. I now
move on to the impact of how information is presented. As discussed in Chap-
ter 2, the literature has shown that even small changes in the presentation of
information can lead to systematic and quite substantial changes in people’s
interpretation of the information. Below, I present evidence from articles E
and F (“Politicians & Bureaucrats” and “Order Effects”) about equivalence
framing effects, issue framing effects, source cue effects and order effects on
policymakers’ interpretation of information.
4.3.1. Equivalence framing effects
In section 2.4.1, I argued, based on the literature on equivalence framing, that
policymakers on average should be expected to make more favorable evalua-
tions of positively framed policy-relevant information than of logically equiv-
alent but negatively framed information. In article E (“Politicians & Bureau-
crats”), we test this expectation on samples of city councilors from Denmark,
Italy, Belgium, and the USA. In our test, we asked respondents to read a policy
proposal and indicate to what extent they agreed or disagreed with the pro-
posal. As in the experiments in sections 4.1 and 4.2, the experimental mate-
rial concerned a policy area for which the respondents were actually responsi-
ble in their councils. In other words, the experiments were not identical across
countries due to differences in policy portfolios. In Denmark, Belgium, and
Italy, the policy proposal concerned a limitation in the number of manned
hours at the municipality’s libraries; in the USA, it concerned a limitation in
walk-in hours at the local police station.
In order to test for equivalence framing effects, respondents were ran-
domly assigned to one of three experimental conditions. In a positive framing
condition, respondents were told that a neighboring municipality had made a
similar change a year ago and that 60 percent of the citizens there had been
satisfied with the change. In a negative framing condition, respondents were
told that a neighboring municipality had made a similar change a year ago and
that 40 percent of the citizens had been dissatisfied with the change. Finally,
to test whether differences between the preference formation in the positive
and negative framing conditions were primarily driven by the positive or the
negative framing, the policy proposal was presented without further infor-
mation in a baseline condition.
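The logical equivalence of the two frames can be made explicit with a line of arithmetic. This sketch is my own illustration, not part of the experimental material:

```python
# A neighboring municipality's citizens, described in two equivalent ways:
satisfied_share = 0.60     # positive frame: "60 percent were satisfied"
dissatisfied_share = 0.40  # negative frame: "40 percent were dissatisfied"

# Both frames describe the same underlying distribution of opinion, so any
# difference in respondents' support is attributable to framing alone.
assert abs(satisfied_share + dissatisfied_share - 1.0) < 1e-12
```

Because the two statements carry identical factual content, any systematic difference between the conditions is a pure equivalence framing effect.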
The results of the different versions of the experiment are reported in Fig-
ure 5 below. Although there were some differences in effect sizes across coun-
tries and although it varied whether effects were mainly driven by the positive
or the negative framing, our results strongly supported the expectation of
equivalence framing effects. In all countries, support for the policy proposal
was higher in the positive framing condition than in the baseline where sup-
port was higher than in the negative framing condition.
Figure 5: Equivalence framing results
[Figure: the left panel shows average support for the proposal in the positive
framing, control, and negative framing conditions for each country (DK, FL,
IT, US); the right panel shows the effects of the positive and negative frames
relative to the baseline condition in each country.]
Note: Leftmost panel: The dependent variable runs from 1-5. Rightmost panel: The effects
of treatments are calculated as compared to the baseline condition. 95% confidence inter-
vals. Reprint from article E (“Politicians & Bureaucrats”).
4.3.2. Issue framing effects
In addition to the test of equivalence framing effects, article E (“Politicians &
Bureaucrats”) includes a test of issue framing effects. In section 2.4.2, I argued
that policymakers should be expected to make different evaluations of infor-
mation depending on the information-related considerations that are empha-
sized in the presentation of the information. In article E, we test this expecta-
tion using an experiment where city councilors were asked, as in the test of
equivalence framing effects, to respond to a policy proposal and indicate to
what extent they agreed or disagreed with it. In the Danish, Italian, and Bel-
gian versions of the experiment, the policy proposal was about improving el-
derly care in the respondents’ municipality; in the USA, the proposal was
about improving public parks in the respondents’ city.
As in the equivalence framing experiment, respondents were randomly
assigned to one of three experimental conditions in order to test for issue
framing effects. The proposal was identical across conditions in terms of con-
tent and financial consequences, but in a positive framing condition, we pre-
sented the proposal with a politically appealing title in order to activate posi-
tive thoughts and considerations. In a negative framing condition, we re-
minded respondents of the obvious fact that in order to spend additional
money in one policy area, other policy areas would have to be given less prior-
ity, which might lead to protests from groups that do not benefit from the
changes. Finally, to test whether differences between the evaluations were pri-
marily driven by the positive or the negative framing, the policy proposal was
presented without being framed in a baseline condition. In all countries, the
positive framing led to significantly more favorable preferences than the base-
line, which led to more favorable preferences than the negative framing. The
effects were quite large (between approx. 0.5 and approx. 0.8 difference be-
tween the positive and negative framing conditions, on a five-point scale) and
thus, our results were highly supportive of the expectation of issue framing
effects.
4.3.3. Source cue effects
In section 2.4.3, I argued that policymakers should be expected to evaluate
policy-relevant information in support of policies if they are told that the pol-
icies are advocated for by groups with whom they are ideologically aligned
(compared to if they are told that the policies are advocated for by groups with
whom they are ideologically unaligned). As was also the case with the two kinds of framing
effects above, we test this expectation in article E (“Politicians & Bureaucrats”)
using an experiment where city councilors were asked to read a policy pro-
posal and indicate to what extent they agreed or disagreed with the proposal.
In all countries, the proposal concerned outsourcing of technical services to
private companies.
To test for source cue effects, we randomly assigned respondents to one of
three experimental conditions. In a baseline condition, respondents were pre-
sented with the pure content of the policy proposal; in two source cue condi-
tions, the proposal was presented along with information about a left- or right-
wing think tank advocating for policies like the one proposed in the experi-
ment. We expected right-leaning policymakers to be more supportive of the
proposal than left-leaning policymakers at the outset (in the baseline condi-
tion). However, we expected the respondents’ preferences to become biased
by the degree of ideological alignment between themselves and the think tanks
in the source cue conditions of the experiment. For instance, in the condition
where respondents learned that the proposal was supported by a left-wing
think tank, right-leaning policymakers were expected to form less favorable
preferences towards the policy, and left-leaning policymakers were expected
to form more favorable preferences, meaning that they would begin to support
policies that are at odds with their underlying ideological preferences.
However, our empirical results were weak and inconsistent with regard to
the existence of source cue effects. In Denmark, left-leaning policymakers did,
as expected, form more favorable preferences when they learned that the pol-
icy proposal was supported by a left-wing think tank but the expected effect of
the right-wing think tank cue did not materialize. In the USA, we found a
strong positive effect of the right-wing think tank cue among right-leaning pol-
icymakers (as expected) but contrary to our expectation, we found a similarly
strong positive effect of the left-wing think tank cue among the right-leaning
policymakers. In Italy, there was a positive effect of both the left- and the right-
wing think tank cues, but contrary to our expectations, the effects were not
contingent on the degree of ideological alignment between the policymakers
and the think tanks. Finally, in Belgium, the only source cue effect was a pos-
itive but only marginally significant effect (p < 0.10) of the left-wing think tank
cue, which was not contingent on the degree of ideological alignment between
the policymakers and the think tank. It is thus safe to say that we did not find
support for the expected source cue effects.
4.3.4. Order effects
In section 2.4.4, I argued that policymakers should be expected to evaluate se-
quences of policy-relevant information differently depending on the order of favorable
and unfavorable information in the sequence. I discussed how primacy effects
or recency effects may be present in policymakers’ evaluation but did not have
a priori expectations with regard to the predominance of either kind of effect.
As was also the case with choice-driven motivated reasoning (section 4.1.3),
the dissertation does not include a direct test of order effects among policy-
makers. However, we provide
some related evidence in article F (“Order Effects”) where we test for order
effects in citizens’ perceptions and judgment based on public sector perfor-
mance information.
For the purpose of the article, we ran a survey experiment on a representa-
tive sample of British citizens. We asked participants to respond to a sequence
of six pieces of performance information about a school, each presented on a
different page of our survey. One piece of information was designed to generate
positive reactions, one was designed to generate negative reactions, and four
were designed to generate neutral reactions among our respondents. To test
for order effects, we randomly assigned the respondents to a “positive first” or
a “negative first” condition. The content and wording of the information were
identical across experimental conditions, but in the positive first condition,
the positive information was placed first and the negative information was
placed fifth in the sequence. Conversely, in the negative first condition, the neg-
ative information was placed first and the positive information was placed
fifth in the sequence. We asked respondents, based on the sequence of infor-
mation, to evaluate the favorability of the school and to indicate their willing-
ness to use the school if they were to choose a school for their own child. Thus,
we were able to investigate 1) whether respondents made different judgments
depending on the order of the positive and negative information and 2) the
relative influence of the first piece and later pieces of information. We did find
some evidence of order effects in our results. Consistent with recency effects,
respondents in the negative first condition found the school slightly more appealing than respondents in the positive first condition, and they were more willing to use the school for their own child. However, only the effect on the willingness to use the school was statistically significant.
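To make the logic of this design concrete, the following is a minimal, purely illustrative sketch: it simulates two randomized order conditions with a small recency shift and tests the between-condition difference with a simple permutation test. All numbers, effect sizes, and function names here are invented for illustration; they are not article F’s data, estimates, or analysis.

```python
import random
import statistics

def simulate_order_experiment(n_per_group=500, recency_shift=0.15, seed=7):
    """Simulate a two-condition order experiment of the kind described above:
    all respondents see identical information, but the valenced items swap
    positions (first vs. fifth), and everyone rates the school on a 0-10
    scale. A recency effect is modeled as a small boost in the condition
    whose positive item came last (the 'negative first' condition)."""
    rng = random.Random(seed)
    clamp = lambda x: min(10.0, max(0.0, x))
    positive_first = [clamp(rng.gauss(5.0, 1.5)) for _ in range(n_per_group)]
    negative_first = [clamp(rng.gauss(5.0 + recency_shift, 1.5))
                      for _ in range(n_per_group)]
    return positive_first, negative_first

def permutation_test(a, b, n_perm=2000, seed=7):
    """Two-sided permutation test for the difference in group means."""
    rng = random.Random(seed)
    observed = statistics.mean(b) - statistics.mean(a)
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[len(a):]) - statistics.mean(pooled[:len(a)])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

pos_first, neg_first = simulate_order_experiment()
diff, p = permutation_test(pos_first, neg_first)
print(f"negative-first minus positive-first: {diff:.3f} (p = {p:.3f})")
```

A permutation test is used here only because it needs no distributional assumptions and no libraries beyond the standard library; any standard difference-in-means test would serve the same illustrative purpose.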
As with the choice-driven motivated reasoning in section 4.1.3, some cau-
tion is needed in terms of generalizing article F’s results to policymakers.
However, the results do call for further attention to how information is pre-
sented as they demonstrate that evaluations of information can be affected
even without changing the wording of the information. I return to the gener-
alizability from studies of ordinary citizens to policymakers in Chapter 5.
Chapter 5. Concluding discussion
In the introduction, I described how, in recent years, the idea of evidence-
based policymaking has gained momentum in democratic systems all over the
world. Central to this idea is the assumption that, by making factually in-
formed decisions based on policy-relevant evidence, policymakers should be
able to work systematically towards the attainment of their political goals.
However, as I further argued in the introduction, the process from evidence to
policy is not a mechanical one. In order for evidence to inform policymaking,
it must be interpreted by human beings and therefore, we cannot understand
the role of evidence in policymaking without understanding how policymakers
interpret policy-relevant information.
With this dissertation, I have contributed important insights of relevance
to this issue by asking whether psychological biases affect policymakers’ inter-
pretation of policy-relevant information, and whether contextual factors mod-
erate the impact of psychological biases on policymakers’ interpretation. As
discussed in Chapter 3, I am not the first to argue that insights about psycho-
logical biases are relevant in order to understand policymakers’ behavior, but
in existing literature, researchers have most often adopted insights from stud-
ies of (non-policymaker) citizens and assumed that if their reasoning is biased,
the same will be the case among policymakers. Following the arguments in
Chapter 3, I found reason to investigate the research question more directly,
and the dissertation therefore contains what is, to my knowledge, the most comprehensive investigation to date of psychological biases among elected policymakers. Four of the dissertation’s six articles report large-n experimental investigations of biases in elected city councilors’ interpretation of policy-relevant
information.
With regard to the first part of the research question, results showed that
policymakers are indeed affected by psychological biases when interpreting
policy-relevant information. They are biased by factors related to the presen-
tation of information and by political attitudes and beliefs. As I argued
throughout the dissertation, these results call into question core assumptions
behind the idea that policy improvements will follow when policymakers get
access to policy-relevant evidence. If policymakers are biased in their under-
standing of information, they will probably be biased in their use of the infor-
mation as well, meaning that they will be hampered in their ability to use pol-
icy-relevant evidence systematically to inform decisions.
Furthermore, with regard to the second part of the research question, re-
sults showed that contextual factors do matter but in ways that were not ex-
pected. Contrary to the expectations in Chapter 2, policymakers’ interpreta-
tions became more biased by political attitudes when they were presented with
larger amounts of evidence and when they were asked to justify their interpretations. This is noteworthy because, as I discussed in the introduction, the prominence of evidence-based policymaking has prompted great efforts to ensure that policymakers have access to policy-relevant evidence, meaning that policymakers today have access to enormous amounts of information. Furthermore, in modern democracies, a variety of institutions force
elected policymakers to continuously justify their claims (e.g. through com-
mittee proceedings, parliamentary debates and through interviews with criti-
cal journalists). The dissertation’s results suggest that, instead of leading to
more informed policies, these factors may make policymakers rely more on
attitudes and less on evidence in their decision-making.
5.1. Don’t policymakers behave like other human beings? Revisited
In Chapter 3, I discussed how policymakers could be expected to differ from
other citizens in terms of personal characteristics and in terms of their role in
society, which may in turn affect their reasoning about policy-relevant infor-
mation. I argued that we could not take for granted that policymakers behave
like other citizens, and to cast light on psychological biases in policymaking, I
had to collect data among actual, elected policymakers. It is relevant to revisit
this discussion in light of the dissertation’s results. Although my research
question does not concern differences between policymakers and other citi-
zens, and there are few direct policymaker-citizen comparisons in the disser-
tation, the results do give some empirical basis for an assessment of similari-
ties and differences between policymakers’ and other citizens’ behavior. This
has important implications in terms of the extent to which insights from citi-
zen-based studies (e.g. articles D and F) can be generalized to policymakers.
Furthermore, it has important implications in terms of political-psychological
theory, as scholars so far have mainly treated psychological phenomena such
as biased reasoning as universal attributes of the human mind. The disserta-
tion’s results can help us understand the extent to which roles affect reasoning
and the extent to which psychological phenomena like biased reasoning
should be treated as universal.
The first thing to be noted in this regard is that, in terms of the tendency
to engage in biased reasoning, policymakers do actually seem to behave in
ways that are similar to the behavior of other citizens. In most tests, policy-
makers’ behavior was consistent with the theoretical expectations in Chapter
2, i.e. with patterns observed among ordinary citizens in other studies. Fur-
thermore, as mentioned in section 4.1.2, direct comparisons of the impact of
attitudes on policymakers’ and citizens’ interpretation of information revealed
no significant differences between the two groups’ tendencies to engage in po-
litically motivated reasoning. These results support the idea of psychological
biases as a general human phenomenon. Therefore, it does seem reasonable at least to form expectations about policymakers’ behavior based on studies of ordinary citizens (e.g., articles D and F).
However, based on the dissertation’s results, it is also clear that one should
be careful about generalizing citizens’ behavior to policymakers. Thus, the re-
sults in articles B and C did reveal systematic differences between policymak-
ers and other citizens in reactions to contextual factors. This was most clear in
article C’s investigation of the effects of justification requirements. As ex-
pected, asking citizens to justify their evaluations decreased the impact of at-
titudes but the same intervention increased the impact of attitudes among pol-
icymakers. Additional analyses in article C suggest that the difference between
policymakers’ and other citizens’ reaction to justification requirements did not
result from differences in personal characteristics (such as political expertise
and attitude importance, cf. sections 3.2.2 and 3.2.3); indeed, the debiasing effect among citizens was strongest among those who were most interested in politics. Instead, I argue that something in the policymaker role itself makes policymakers react differently to justification requirements. This suggestion is supported by additional analyses in article C showing that the bias-strengthening effects of justification requirements were
driven by the behavior of experienced policymakers who had had a long time
to learn to behave as policymakers, whereas justification requirements did not
lead to stronger biases among recently elected policymakers.
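The moderation logic behind this comparison can be sketched in a few lines. The cell values below are invented solely to mirror the direction of the pattern described in article C (debiasing among citizens, bias-strengthening among policymakers); they are not the article’s estimates, and the function names are mine.

```python
from statistics import mean

def attitude_impact(cell):
    """Attitude impact = gap in mean evaluations (1-7 scale) between
    respondents whose prior attitude supports the policy ('pro') and
    those whose prior attitude opposes it ('con')."""
    return mean(cell["pro"]) - mean(cell["con"])

def justification_effect(group):
    """Change in attitude impact caused by a justification requirement:
    negative values indicate debiasing, positive values indicate that
    justification amplified the bias."""
    return attitude_impact(group["justify"]) - attitude_impact(group["control"])

# Hypothetical cell data, chosen only to illustrate the two patterns.
citizens = {
    "control": {"pro": [6, 6, 5], "con": [3, 3, 4]},  # large attitude gap
    "justify": {"pro": [5, 5, 5], "con": [4, 4, 4]},  # gap shrinks
}
policymakers = {
    "control": {"pro": [5, 6, 5], "con": [4, 3, 4]},
    "justify": {"pro": [7, 6, 6], "con": [2, 3, 3]},  # gap widens
}
print("citizens:", justification_effect(citizens))
print("policymakers:", justification_effect(policymakers))
```

In regression terms, this difference-of-differences corresponds to the interaction between prior attitude and the justification treatment, estimated separately for each sample.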
By pointing to the importance of taking roles seriously, the dissertation’s
results are not only important to the literature on evidence-based policymak-
ing (and factually informed decision-making more broadly). They also con-
tribute to political-psychological theory and to the broader literature on the
behavior of political elites by highlighting that although policymakers are cer-
tainly human beings like everyone else and although it seems reasonable in
most cases to expect similarities between policymakers and other citizens,
there are situations where being a policymaker seems to affect behavior. Of
course, as noted above, the dissertation’s research question was not about dif-
ferences between policymakers and other citizens, and my ambition was not
to make systematic investigations of the impact of roles on peoples’ reasoning.
I have only scratched the surface of this important issue, and as I discuss in
section 5.3.2, more systematic investigations of the issue are clearly war-
ranted. However, following my results, scholars with an interest in policymak-
ers’ behavior (and the political behavior of political elites more generally) need
to consider the extent to which roles may affect behaviors of interest. It does
seem fruitful to continue to run studies on samples of actual policymakers or,
at a minimum, attempt to identify groups of citizens who behave most
like policymakers, instead of uncritically generalizing from citizen-based find-
ings.
5.2. Methodological caveats
While the studies in this dissertation certainly contribute valuable insights
about policymakers’ interpretation of policy-relevant information, there are
some important methodological limitations that should be considered.
First, readers might question the generalizability of the dissertation’s re-
sults because they are mainly based on answers from Danish local policymak-
ers. Just as I argued (and found) that policymakers may sometimes behave
differently than other citizens, we cannot take for granted that all policymak-
ers are identical in their behavior. For example, following the discussion above
about the importance of taking roles seriously, it is reasonable to argue that
the policymaker role will tend to entail different norms and expectations
across cultures, and Danish policymakers may behave differently than policy-
makers from other countries. In article E, we attempt to address this as di-
rectly as possible by conducting our studies in Denmark, Italy, Belgium, and
the USA, and overall, our results generalize across countries. However, even
though these countries represent quite different political, institutional, and
cultural settings, it is also clear that they are all Western, educated, industrialized, rich, and democratic (WEIRD) (Henrich, Heine, and Norenzayan 2010).
Furthermore, all the policymakers who were surveyed in the dissertation are
local policymakers. In Chapter 3, I argued that local policymakers were ideal
in terms of balancing the need for real policymaker respondents and the need
for large samples. However, we cannot take for granted that local policymakers behave like, for example, policymakers at the national level, since local policymakers on average are probably less professional political actors than members of national parliaments.19 This may affect how they reason about policy-relevant
information (cf. arguments in section 3.2.2), and some caution is therefore needed when generalizing the dissertation’s results to other tiers of government. Thus, future research might contribute important insights simply by replicating this dissertation’s studies (directly or conceptually) in other countries and tiers of government.
19 For instance, De Paola and Scoppa (2011) have found that political competence is positively related to political competition, which tends to be lower at the local than at the national level.
Second, readers might rightfully question the ecological validity of the sur-
vey experiments that I have used to test my theoretical expectations (Blom-
Hansen, Morton, and Serritzlew 2015, 166). In Chapter 3, I argued that survey
experiments were useful in terms of testing my expectations as they allowed
me to observe large numbers of policymakers’ interpretations of comparable
pieces of information in controlled environments, which could be designed for
the explicit purpose of testing my expectations with a high level of internal
validity. However, this high level of control did not come without a cost. Alt-
hough I aimed at designing the experiments in a way respondents could relate
to (e.g. by presenting respondents with experimental material about policies
for which they were responsible in their councils), it was typically necessary to
place the respondents in a quite artificial information environment. The ex-
perimental material contained fictional information, which was further sim-
plified to control the factors of interest in each experiment. I argue that this
was helpful in terms of testing the specific expectations in the dissertation, but
I acknowledge that it limits the studies’ ability to inform us about how psycho-
logical biases affect policymakers’ use of information in their everyday, real-
world decision-making. Therefore, I encourage scholars to think carefully about how to design experiments in ways that are closer to the reality of interest, and to “open up the toolbox” by approaching questions relevant to the dissertation’s research question with greater methodological diversity.
5.3. What’s next?
As I wrote at the beginning of this chapter, the short answer to the disserta-
tion’s research question is that policymakers are affected by psychological bi-
ases when they interpret policy-relevant information and that contextual fac-
tors do moderate the impact of psychological biases on policymakers’ inter-
pretation (although in unexpected ways). However, many important questions of relevance to the dissertation’s research question remain to be answered in future research. In addition to the methodological caveats discussed above, there are questions that have not been addressed, or on which I have only scratched the surface. In the remainder of this chapter, I will discuss such questions, which I hope to see investigated in future research. Section
5.3.1 concerns implications of the influence of factors related to the presenta-
tion of information on policymakers’ interpretation of the information, and
section 5.3.2 concerns the influence of attitudes and beliefs on policymakers’
interpretation of information.
5.3.1. May policymakers be manipulated in their preference formation?
Results in section 4.3 showed that even small changes in the presentation of
policy-relevant information can lead to systematic and quite substantial
changes in policymakers’ interpretation of, and preference formation based
on, the information. These results give rise to some fundamental democratic questions, which I have mentioned only briefly in the summary report but which are discussed at greater length in article E (“Politicians & Bureaucrats”). Thus, as
we discuss in that article, policymakers often have to rely on bureaucrats to
decide what information is relevant in any given situation and to communicate
the information to them. Ideally, in doing so, the bureaucrats should support
the policymakers in pursuing their political goals. As Aberbach, Putnam, and
Rockman (1981) argue: “in a well-ordered polity, (…) politicians articulate so-
ciety’s dreams, and bureaucrats help bring them gingerly to earth” (ibid., 262).
However, scholars have long worried that because of their expertise and
unique position in the political system, bureaucrats are in a powerful position
to influence policymaking. Max Weber argued that “(t)he power position of a
fully developed bureaucracy is always overtowering. The ‘political master’
finds himself in the position of the dilettante who stands opposite the ‘expert’”
(Weber 1970 [1922], 232), and similar concerns have been expressed since
then (Gulick 1937; Niskanen 1971; Miller 2005). Thus, to use the words of
Aberbach, Putnam, and Rockman, policymakers may very well articulate
dreams for society but bureaucrats will be able to disregard these dreams if
they do not share them.
Following the results in section 4.3, there is reason to consider the possi-
bility that existing literature may not have fully grasped the power potential of
bureaucrats. Thus, as we argue in article E, if policymakers depend on bureau-
crats to decide what information is relevant and to communicate the infor-
mation to them, and if policymakers’ use of information depends on how it is
presented to them, then bureaucrats may not even have to disregard the poli-
cymakers’ dreams for society to pursue their own goals. Scholars have worried
that political elites “face few constraints to using frames to influence and ma-
nipulate citizens' opinions” (Druckman 2001, 1041), but the results in section
4.3 show that policymakers are not insulated from manipulation themselves.
By designing information strategically, bureaucrats may be able to use psychological insights to manipulate policymakers’ preference formation, steering them toward pursuing the bureaucrats’ own dreams for society.20
Important questions related to this discussion remain unanswered and
should be addressed by future research. The results in section 4.3 show that
policymakers can be influenced through factors related to the presentation of
information, and survey answers in article E show that policymakers are con-
cerned about the undue influence of bureaucrats. However, future research
should investigate if (and when) psychological insights are used to manipulate
policymakers in practice, as the dissertation’s tests do not answer that ques-
tion. Furthermore, future research should attempt to uncover the conditions
under which policymakers are most susceptible to manipulation. A variety of
variables might matter in this regard, and for example, the influence of poli-
cymakers’ trust in the bureaucracy might be worthy of investigation
(Druckman 2001). I believe that continued investigation of these and related
questions will yield important insights.
5.3.2. Does the policymaker role make it impossible to reduce motivated reasoning?
Another set of important questions concern the impact of roles and role per-
ceptions on policymakers’ reasoning about policy-relevant information, and
the extent to which policymakers may under some conditions be less affected
by their attitudes and beliefs when interpreting policy-relevant information.
Results in the dissertation suggest that being a policymaker affects policy-
makers’ reasoning, but I encourage more systematic investigations of this
question. My ability to make causal claims about the effects of being a policy-
maker is weakened by the fact that my investigation was not designed for the
explicit purpose of uncovering such effects. Investigating effects of being a pol-
icymaker without compromising internal validity is not methodologically
straightforward as the independent variable of interest (the policymaker role)
cannot easily be randomized in experiments. However, one strategy could be
to randomize the extent to which the policymaker role is rendered salient in
experiments and investigate whether this affects policymakers’ behavior
(Cohn, Fehr, and Maréchal 2014). Alternatively, scholars might be able to investigate the impact of being a policymaker through regression discontinuity designs, comparing the experimentally measured behavior of narrow winners and losers of elections (Enemark et al. 2016).
20 Of course, other information providers may use these insights to influence policymakers’ preferences as well; indeed, as noted in section 2.1, policymakers may draw on a wide range of information sources in their decision-making. However, as we show in article E, compared to other actors in the political system, bureaucrats are in a privileged position, as they constitute a very central source of information.
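The core of the regression discontinuity logic can be sketched as follows. Everything here is hypothetical: the data, the outcome measure, and the bandwidth are invented for illustration, and a real application would of course model the running variable rather than simply compare means.

```python
import statistics

def close_election_comparison(candidates, bandwidth=0.02):
    """Regression-discontinuity-style comparison: among candidates whose
    vote margin falls within a narrow bandwidth around the winning
    threshold, holding office is as-good-as-random, so the difference in
    mean outcomes between narrow winners and narrow losers estimates the
    effect of being a policymaker.

    `candidates` is a list of (vote_margin, outcome) pairs, where
    vote_margin > 0 means the candidate won the seat."""
    winners = [y for margin, y in candidates if 0 < margin <= bandwidth]
    losers = [y for margin, y in candidates if -bandwidth <= margin <= 0]
    if not winners or not losers:
        raise ValueError("no observations inside the bandwidth")
    return statistics.mean(winners) - statistics.mean(losers)

# Hypothetical data: (vote margin, bias score from a survey experiment).
# Landslide winners and losers fall outside the bandwidth and are ignored.
sample = [(0.010, 0.60), (0.015, 0.55), (-0.010, 0.40), (-0.005, 0.45),
          (0.300, 0.90), (-0.250, 0.10)]
print(close_election_comparison(sample))
```

The design choice that does the causal work is the bandwidth: only races decided by a hair are compared, so winners and losers should be similar in everything except office-holding.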
In addition to uncovering the extent to which the policymaker role affects
policymakers’ behavior, it is also important that scholars attempt to uncover
what it is in the policymaker role that matters. In article C, I discuss how it is a policymaker’s job to be partisan (Andeweg 1997), whereas ordinary citizens have been shown to see social value in appearing politically independent (Klar and Krupnikov 2016). Policymakers are often judged not based
on their ability to make factually correct claims but on their ability to appear
consistent in their political views (Tavits 2007; Sorek, Haglin, and Geva 2017).
In that sense, one may argue that policymakers are subject to other logics of
appropriateness (March and Olsen 2011) than citizens are, which may very
well affect their reasoning in relation to policy-relevant information. I believe
that in order to reach a deeper understanding of this question, it will be useful
to base future investigations on more sociological and qualitative approaches
than I have employed in this dissertation. For instance, through qualitative
investigations of policymakers’ perceptions regarding their professional role
and regarding how this role relates to their use of policy-relevant information,
scholars may be able to reach important insights about the causes of policy-
makers’ behavior. This may also be helpful in terms of identifying what needs
to happen in order for policymakers to develop a less closed-minded approach
to information.
When it comes to the possibility of reducing motivated reasoning in poli-
cymakers’ interpretation of policy-relevant information, it is worth noting that
the existing literature has found debiasing effects of incentives to make accu-
rate judgments based on political information (Bullock et al. 2015; Prior,
Sood, and Khanna 2015). The literature has focused on the effects of monetary
incentives on ordinary (non-policymaker) citizens’ judgments, but it may be
relevant, in future research, to consider the political incentives facing policy-
makers in politics today. It may for example be relevant to look into mass me-
dia coverage of politics and its effects on policymakers’ behavior. Coverage of
politics often resembles coverage of sports events (Groenendyk 2013, xi) in the
sense that the political process tends to be viewed as a battle with winners and
losers,21 and it may be viewed as a sign of weakness if policymakers give in to
evidence and good arguments and acknowledge that they have been wrong on
some issue. In this sense, policymakers are treated more as sportsmen/women than as problem solvers. It seems reasonable to argue (speculatively, I admit) that this may contribute to creating a political culture where policymakers, in order to survive in the political game, have to maintain a closed mind when approaching policy-relevant information. I encourage scholars to investigate this more systematically in the future, also in order to assess the extent to which the mass media, through a more constructive approach to the coverage of politics (Haagerup 2017), may be key to creating a more open-minded political culture.
21 See e.g. Zurcher (2016), Bienkov (2017), Eddy (2017), and Samuel (2017).
In the pages above, I have raised only some of the many perspectives that could be discussed in light of the dissertation and its results. It is clear
that important issues remain open for future research to investigate. I hope
that the dissertation, in addition to contributing specific results and insights
about psychological biases in policymakers’ use of policy-relevant infor-
mation, will inspire others to join the continued search for insights of rele-
vance to the research question. A great deal of work remains to be carried out.
References
Aberbach, Joel D., Robert D. Putnam, and Bert A. Rockman. 1981. Bureaucrats and
Politicians in Western Democracies. Cambridge, MA.: Harvard University
Press.
Albæk, Erik, and Michael Andersen. 2013. “Var Der Balance i Tv-Dækningen?” In
KV09. Analyser Af Kommunalvalget 2009, edited by Jørgen Elklit and Ulrik
Kjær, 159–81. Odense: Syddansk Universitetsforlag.
Andeweg, Rudy B. 1997. “Role Specialisation or Role Switching? Dutch MPs
between Electorate and Executive.” The Journal of Legislative Studies 3 (1):
110–27. doi:10.1080/13572339708420502.
Andrews, Rhys, George A. Boyne, and Richard M. Walker. 2006. “Subjective and
Objective Measures of Organizational Performance: An Empirical Exploration.”
In Public Service Performance: Perspectives on Measurement and
Management, edited by George A. Boyne, Kenneth J. Meier, Laurence J.
O’Toole, and Richard M. Walker, 14–34. Cambridge: Cambridge University
Press. doi:10.1017/CBO9780511488511.002.
Arceneaux, Kevin, and Ryan J. Vander Wielen. 2017. Taming Intuition: How
Reflection Minimizes Partisan Reasoning and Promotes Democratic
Accountability. Cambridge: Cambridge University Press.
Askim, Jostein. 2011. “Determinants of Performance Information Utilization in
Political Decision Making.” In Performance Information in the Public Sector:
How It Is Used, edited by Wouter Van Dooren and Steven Van de Walle, 125–
39. New York: Palgrave Macmillan.
Baekgaard, Martin, and Søren Serritzlew. 2016. “Interpreting Performance
Information: Motivated Reasoning or Unbiased Comprehension.” Public
Administration Review 76 (1): 73–82. doi:10.1111/puar.12406.
Bartels, Brandon L., and Chris W. Bonneau. 2014. “Can Empirical Research Be
Relevant to the Policy Process? Understanding the Obstacles and Exploiting
the Opportunities.” In Making Law and Courts Research Relevant: The
Normative Implications of Empirical Research, edited by Brandon L. Bartels and Chris W. Bonneau, 221–28. New York: Routledge.
BBC. 2000. “Blunkett Asks Academics for Help.” BBC News, 2 February 2000.
http://news.bbc.co.uk/2/hi/uk_news/education/628919.stm.
Behn, Robert D. 2003. “Why Measure Performance? Different Purposes Require
Different Measures.” Public Administration Review 63 (5): 586–606.
doi:10.1111/1540-6210.00322.
Belardinelli, Paolo, Nicola Bellé, Mariafrancesca Sicilia, and Ileana Steccolini. 2018.
“Framing Effects under Different Uses of Performance Information: An
Experimental Study on Public Managers.” Public Administration Review, forthcoming. doi:10.1111/puar.12969.
Bellé, Nicola, Paola Cantarelli, and Paolo Belardinelli. 2018. “Prospect Theory Goes
Public: Experimental Evidence on Cognitive Biases in Public Policy and
Management Decisions.” Public Administration Review, forthcoming.
doi:10.1111/puar.12960.
Bienkov, Adam. 2017. “The Battle for Number 10 Was a Clear Win for Jeremy
Corbyn.” Business Insider Nordic, 30 May 2017.
https://nordic.businessinsider.com/general-election-battle-for-number-10-
channel-4-sky-news-win-jeremy-corbyn-over-theresa-may-2017-
5?r=UK&IR=T.
Bisgaard, Martin. 2015. “Bias Will Find a Way: Economic Perceptions, Attributions
of Blame, and Partisan-Motivated Reasoning during Crisis.” The Journal of
Politics 77 (3): 849–60. doi:10.1086/681591.
Blom-Hansen, Jens, Rebecca Morton, and Søren Serritzlew. 2015. “Experiments in
Public Management Research.” International Public Management Journal 18
(2): 151–70. doi:10.1080/10967494.2015.1024904.
Bodenhausen, Galen V., Geoffrey P. Kramer, and Karin Süsser. 1994. “Happiness
and Stereotypic Thinking in Social Judgment.” Journal of Personality and
Social Psychology 66 (4): 621–32.
Bolsen, Toby, James N. Druckman, and Fay L. Cook. 2014. “The Influence of
Partisan Motivated Reasoning on Public Opinion.” Political Behavior 36 (2):
235–62. doi:10.1007/s11109-013-9238-0.
Bond, Samuel D., Kurt A. Carlson, Margaret G. Meloy, J. Edward Russo, and Robin
J. Tanner. 2007. “Information Distortion in the Evaluation of a Single Option.”
Organizational Behavior and Human Decision Processes 102 (2): 240–54.
doi:10.1016/j.obhdp.2006.04.009.
Boyne, George A. 2002. “Concepts and Indicators of Local Authority Performance:
An Evaluation of the Statutory Frameworks in England and Wales.” Public
Money & Management 22 (2): 17–24. doi:10.1111/1467-9302.00303.
Bradner, Eric. 2017. “Conway: Trump White House Offered ‘alternative Facts’ on
Crowd Size.” CNN, 23 January 2017.
https://edition.cnn.com/2017/01/22/politics/kellyanne-conway-alternative-
facts/.
Brehm, Jack W. 1956. “Postdecision Changes in the Desirability of Alternatives.”
The Journal of Abnormal and Social Psychology 52 (3): 384–89.
doi:10.1037/h0041006.
Bullock, John G., Alan S. Gerber, Seth J. Hill, and Gregory A. Huber. 2015.
“Partisan Bias in Factual Beliefs about Politics.” Quarterly Journal of Political
Science 10 (May): 519–78. doi:10.1561/100.00014074.
Cabinet Office. 1999a. “Modernising Government.” London.
http://webarchive.nationalarchives.gov.uk/20131205101137/http://www.archi
ve.official-documents.co.uk/document/cm43/4310/4310.htm.
Cabinet Office. 1999b. “Professional Policy Making for the Twenty First Century.”
London. http://dera.ioe.ac.uk/6320/1/profpolicymaking.pdf.
Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes.
1960. The American Voter. New York: Wiley.
Canes-Wrone, Brandice, David W. Brady, and John F. Cogan. 2002. “Out of Step,
out of Office: Electoral Accountability and House Members’ Voting.” American
Political Science Review 96 (1): 127–40. doi:10.1017/S0003055402004276.
Carlson, Kurt A., Meg G. Meloy, and J. Edward Russo. 2006. “Leader-Driven
Primacy: Using Attribute Order to Affect Consumer Choice.” Journal of
Consumer Research 32 (4): 513–18. doi:10.1086/500481.
Chaiken, Shelly. 1980. “Heuristic Versus Systematic Information Processing and
the Use of Source Versus Message Cues in Persuasion.” Journal of Personality
and Social Psychology 39 (5): 752–66.
Chong, Dennis, and James N. Druckman. 2010. “Dynamic Public Opinion:
Communication Effects over Time.” American Political Science Review 104
(4): 663–80. doi:10.1017/S0003055410000493.
Chun, Young Han, and Hal G. Rainey. 2005. “Goal Ambiguity and Organizational
Performance in U.S. Federal Agencies.” Journal of Public Administration
Research and Theory 15 (4): 529–57. doi:10.1093/jopart/mui030.
Clarence, Emma. 2002. “Technocracy Reinvented: The New Evidence Based Policy
Movement.” Public Policy and Administration.
doi:10.1177/095207670201700301.
Cohen, Geoffrey L. 2003. “Party Over Policy: The Dominating Impact of Group
Influence on Political Beliefs.” Journal of Personality and Social Psychology
85 (5): 808–22. doi:10.1037/0022-3514.85.5.808.
Cohn, Alain, Ernst Fehr, and Michel André Maréchal. 2014. “Business Culture and
Dishonesty in the Banking Industry.” Nature 516 (729): 86–89.
doi:10.1038/nature13977.
Cooper, Joel. 2007. Cognitive Dissonance: Fifty Years of a Classic Theory. SAGE
Publications Ltd.
Davies, Huw, Sandra Nutley, and Peter Smith. 2000. “Introducing Evidence-Based
Policy and Practice in Public Services.” In What Works? Evidence-Based Policy
and Practice in Public Services, 1–11. Bristol: The Policy Press.
Davis, J. H. 1984. “Order in the Courtroom.” In Perspectives in Psychology and
Law., edited by D. J. Miller, D. G. Blackman, and A. J. Chapman. New York:
Wiley.
Dooren, Wouter Van. 2011. “Nothing New Under the Sun? Change and Continuity
in the Twentieth-Century Performance Movements.” In Performance
Information in the Public Sector: How It Is Used, edited by Wouter Van
Dooren and Steven Van de Walle, 15–27. New York: Palgrave Macmillan.
Dooren, Wouter Van, Geert Bouckaert, and John Halligan. 2010. Performance
Management in the Public Sector. Paperback. Abingdon, Oxton: Routledge.
Doubleday, Robert, and James Wilsdon. 2012. “Science Policy: Beyond the Great
and Good.” Nature 485: 301–2. doi:10.1038/485301a.
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper
Collins.
Druckman, James N., and Kjersten R. Nelson. 2003. “Framing and Deliberation:
How Citizens’ Conversations Limit Elite Influence.” American Journal of Political Science 47
(4): 729–45. doi:10.1111/1540-5907.00051.
Druckman, James N. 2001. “On the Limits of Framing Effects: Who Can Frame?”
The Journal of Politics 63 (4): 1041–66. doi:10.1111/0022-3816.00100.
Druckman, James N., and Arthur Lupia. 2012. “Experimenting with Politics.”
Science. doi:10.1126/science.1207808.
Druckman, James N. 2001. “The Implications of Framing Effects for Citizen
Competence.” Political Behavior 23 (3): 225–56.
doi:10.1023/A:1015006907312.
Druckman, James N. 2004. “Political Preference Formation: Competition,
Deliberation, and the (Ir)Relevance of Framing Effects.” The American
Political Science Review 98 (4): 671–86.
doi:10.1017/S0003055404041413.
Druckman, James N. 2012. “The Politics of Motivation.” Critical Review: A
Journal of Politics and Society 24 (2): 199–216.
doi:10.1080/08913811.2012.711022.
Eagly, Alice H, and Shelly Chaiken. 1993. The Psychology of Attitudes. Fort Worth,
TX: Harcourt Brace Jovanovich.
Eddy, Melissa. 2017. “Angela Merkel Declared Winner of German Election Debate.”
The New York Times, 3 September 2017.
https://www.nytimes.com/2017/09/03/world/europe/angela-merkel-martin-
schulz-debate-german-elections.html.
Elmelund-Præstekær, Christian, and Morten Skovsgaard. 2017. “Hvor Får
Vælgerne Deres Viden Om Valget Fra?” In KV13. Analyser Af Kommunalvalget
2013, edited by Jørgen Elklit, Christian Elmelund-Præstekær, and Ulrik Kjær,
43–59. Odense: Syddansk Universitetsforlag.
Enemark, Daniel, Clark C. Gibson, Mathew D. McCubbins, and Brigitte Seim. 2016.
“Effect of Holding Office on the Behavior of Politicians.” Proceedings of the
National Academy of Sciences 113 (48): 13690–95.
http://www.pnas.org/lookup/doi/10.1073/pnas.1511501113.
Eppler, Martin J., and Jeanne Mengis. 2004. “The Concept of Information
Overload: A Review of Literature from Organization Science, Accounting,
Marketing, MIS, and Related Disciplines.” The Information Society 20 (5):
325–44. doi:10.1080/01972240490507974.
Festinger, Leon. 1957. A Theory of Cognitive Dissonance. Stanford University
Press.
Gagné, Marylène, and Edward L. Deci. 2005. “Self-Determination Theory and
Work Motivation.” Journal of Organizational Behavior 26 (4): 331–62.
Geen, Russel G. 1991. “Social Motivation.” Annual Review of Psychology 42.
Gerrish, Ed. 2016. “The Impact of Performance Management on Performance in
Public Organizations: A Meta-Analysis.” Public Administration Review 76 (1):
48–66. doi:10.1111/puar.12433.
Gilens, Martin, and Naomi Murakawa. 2002. “Elite Cues and Political Decision
Making.” In Research in Micropolitics: Political Decision Making,
Deliberation and Participation, edited by Michael X. Delli Carpini, Leonie
Huddy, and Robert Y. Shapiro, 15–49. Bingley: Emerald.
Gollwitzer, Peter M. 1990. “Action Phases and Mind-Sets.” In Handbook of
Motivation and Cognition: Foundations of Social Behavior, Volume 2, edited
by E. Tory Higgins and Richard M. Sorrentino, 53–92. New York: The Guilford
Press.
Goren, Paul, Christopher M. Federico, and Miki Caul Kittilson. 2009. “Source Cues,
Partisan Identities, and Political Value Expression.” American Journal of
Political Science 53 (4): 805–820. doi:10.1111/j.1540-5907.2009.00402.x.
Grimmelikhuijsen, Stephan, Sebastian Jilke, Asmus Leth Olsen, and Lars
Tummers. 2017. “Behavioral Public Administration: Combining Insights from
Public Administration and Psychology.” Public Administration Review 77 (1):
45–56. doi:10.1111/puar.12609.
Groenendyk, Eric W. 2013. Competing Motives in the Partisan Mind: How
Loyalty and Responsiveness Shape Party Identification and Democracy. New
York: Oxford University Press.
Gulick, Luther. 1937. “Notes on the Theory of Organization.” In Papers on the
Science of Administration, edited by Luther Gulick and L. Urwick. New York:
Institute of Public Administration.
Haagerup, Ulrik. 2017. Constructive News: How to Save the Media and
Democracy with Journalism of Tomorrow. Aarhus: Aarhus University Press.
Hallsworth, Michael, Mark Egan, Jill Rutter, and Julian McCrae. 2018.
“Behavioural Government: Using Behavioural Science to Improve How
Governments Make Decisions.” London.
https://www.behaviouralinsights.co.uk/publications/behavioural-
government/.
Hansen, Kasper Møller. 2017. “Partiers Og Kandidaters Forskellige
Valgkampagner.” In KV13. Analyser Af Kommunalvalget 2013, edited by
Jørgen Elklit, Christian Elmelund-Præstekær, and Ulrik Kjær, 61–77. Odense:
Syddansk Universitetsforlag.
Hansen, Kasper Møller. 2018. “Valgdeltagelsen Ved Kommunal- Og Regionsvalget
2017.” CVAP WP 1/2018. CVAP Working Paper Series. Copenhagen.
Hastorf, Albert H, and Hadley Cantril. 1954. “They Saw a Game: A Case Study.” The
Journal of Abnormal and Social Psychology 49 (1): 129–34.
doi:10.1037/h0057880.
Hatry, Harry P. 2006. Performance Measurement: Getting Results. 2nd ed.
Washington D.C.: The Urban Institute.
Heinrich, Carolyn J. 2007. “Evidence-Based Policy and Performance
Management.” The American Review of Public Administration 37 (3): 255–77.
doi:10.1177/0275074007301957.
Henrich, Joseph, Steven J Heine, and Ara Norenzayan. 2010. “The Weirdest People
in the World?” Behavioral and Brain Sciences 33: 61–135.
Hogarth, Robin M, and Hillel J Einhorn. 1992. “Order Effects in Belief Updating:
The Belief-Adjustment Model.” Cognitive Psychology 24: 1–55.
Holm, Jakob Majlund. 2017. “Double Standards? How Historical and Political
Aspiration Levels Guide Managerial Performance Information Use.” Public
Administration 95 (4): 1026–40. doi:10.1111/padm.12379.
Holm, Jakob Majlund. 2018. “Successful Problem Solvers? Managerial
Performance Information Use to Improve Low Organizational Performance.”
Journal of Public Administration Research and Theory, no. May: 1–18.
doi:10.1093/jopart/muy017.
Huddy, Leonie, David O Sears, and Jack S Levy, eds. 2013. The Oxford Handbook
of Political Psychology. 2nd ed. New York: Oxford University Press.
Hvidman, Ulrik, and Simon Calmar Andersen. 2014. “Impact of Performance
Management in Public and Private Organizations.” Journal of Public
Administration Research and Theory 24 (1): 35–58.
doi:10.1093/jopart/mut019.
James, Oliver, Sebastian R. Jilke, and Gregg G Van Ryzin, eds. 2017. Experiments
in Public Management Research: Challenges and Contributions. Cambridge:
Cambridge University Press.
James, Oliver, and Gregg G Van Ryzin. 2017. “Motivated Reasoning about Public
Performance: An Experimental Study of How Citizens Judge Obamacare.”
Journal of Public Administration Research and Theory 27 (1): 197–209.
doi:10.1093/jopart/muw049.
Jerit, Jennifer, and Jason Barabas. 2012. “Partisan Perceptual Bias and the
Information Environment.” The Journal of Politics 74 (3): 672–84.
doi:10.1017/S0022381612000187.
Jervis, Robert. 1976. Perception and Misperception in International Politics.
Princeton: Princeton University Press.
Kahan, Dan M., David Hoffman, Danieli Evans, Neal Devins, Eugene Lucci, and
Katherine Cheng. 2016. “‘Ideology’ or ‘Situation Sense’? An Experimental
Investigation of Motivated Reasoning and Professional Judgment.” University
of Pennsylvania Law Review 164: 349–439.
Kahan, Dan M. 2016a. “The Politically Motivated Reasoning Paradigm, Part 1:
What Politically Motivated Reasoning Is and How to Measure It.” Emerging
Trends in the Social and Behavioral Sciences: An Interdisciplinary,
Searchable, and Linkable Resource, 1–16.
Kahan, Dan M. 2016b. “The Politically Motivated Reasoning Paradigm, Part 2:
Unanswered Questions.” Emerging Trends in the Social and Behavioral
Sciences: An Interdisciplinary, Searchable, and Linkable Resource, 1–15.
Kahan, Dan M, Ellen Peters, Erica Cantrell Dawson, and Paul Slovic. 2017.
“Motivated Numeracy and Enlightened Self-Government.” Behavioural Public
Policy 1 (1): 54–86. doi:10.1017/bpp.2016.2.
Kaplan, Robert S., and David P. Norton. 1992. “The Balanced Scorecard –
Measures That Drive Performance.” Harvard Business Review 70 (1): 71–91.
Kenny, Caroline. 2018. “Rudy Giuliani Says ‘Truth Isn’t Truth.’” CNN, 19 August
2018. https://edition.cnn.com/2018/08/19/politics/rudy-giuliani-truth-isnt-
truth/index.html.
Klar, Samara, and Yanna Krupnikov. 2016. Independent Politics: How American
Disdain for Parties Leads to Political Inaction. New York: Cambridge
University Press.
Krosnick, Jon A. 1990. “Government Policy and Citizen Passion – A Study of Issue
Publics in Contemporary America.” Political Behavior 12 (1): 59–92.
Kruglanski, Arie W, and Donna M Webster. 1996. “Motivated Closing of the Mind:
‘Seizing’ and ‘Freezing.’” Psychological Review 103 (2): 263–83.
doi:10.1037/0033-295X.103.2.263.
Kunda, Ziva. 1990. “The Case for Motivated Reasoning.” Psychological Bulletin 108
(3): 480–98. doi:10.1037/0033-2909.108.3.480.
Lake, David A. 2011. “Two Cheers for Bargaining Theory. Assessing Rationalist
Explanations of the Iraq War.” International Security 35 (3): 7–52.
Leary, Mark R, and Robin M Kowalski. 1990. “Impression Management: A
Literature Review and Two-Component Model.” Psychological Bulletin 107 (1):
34–47.
Leeper, Thomas J., and Rune Slothuus. 2014. “Political Parties, Motivated
Reasoning, and Public Opinion Formation.” Political Psychology 35 (SUPPL.1):
129–56. doi:10.1111/pops.12164.
Lerner, Jennifer S, Julie H Goldberg, and Phillip E Tetlock. 1998. “Sober Second
Thought: The Effects of Accountability, Anger, and Authoritarianism on
Attributions of Responsibility.” Personality and Social Psychology Bulletin 24
(6): 563–74.
Lerner, Jennifer S, and Philip E Tetlock. 1999. “Accounting for the Effects of
Accountability.” Psychological Bulletin 125 (2): 255–75.
Levin, Irwin P., Sandra L. Schneider, and Gary J. Gaeth. 1998. “All Frames Are Not
Created Equal: A Typology and Critical Analysis of Framing Effects.”
Organizational Behavior and Human Decision Processes 76 (2). Academic
Press: 149–88. doi:10.1006/obhd.1998.2804.
Levy, Jack S. 2013. “Psychology and Foreign Policy Decision-Making.” In The
Oxford Handbook of Political Psychology, edited by Leonie Huddy, David O
Sears, and Jack S Levy, 2nd ed., 301–33. New York: Oxford University Press.
Linde, Jona, and Barbara Vis. 2017. “Do Politicians Take Risks Like the Rest of Us?
An Experimental Test of Prospect Theory Under MPs.” Political Psychology 38
(1): 101–17.
Lodge, Milton, and Charles S Taber. 2000. “Three Steps toward a Theory of
Motivated Political Reasoning.” In Elements of Reason, edited by Arthur Lupia,
Mathew D McCubbins, and Samuel L Popkin, 183–213. Cambridge: Cambridge
University Press.
Lord, Charles G., Lee Ross, and Mark R. Lepper. 1979. “Biased Assimilation and
Attitude Polarization: The Effects of Prior Theories on Subsequently
Considered Evidence.” Journal of Personality and Social Psychology 37 (11):
2098–2109.
Lowry, R.C., J.E. Alt, and K.E. Ferree. 1998. “Fiscal Policy Outcomes and Electoral
Accountability in American States.” American Political Science Review 92 (4):
759–774. doi:10.2307/2586302.
Lupia, Arthur, Mathew D. McCubbins, and Samuel L. Popkin. 2000. Elements of
Reason: Cognition, Choice, and the Bounds of Rationality. Edited by Arthur
Lupia, Mathew D. McCubbins, and Samuel L. Popkin. Cambridge: Cambridge
University Press.
March, James G., and Johan P. Olsen. 2011. “The Logic of Appropriateness.” In The
Oxford Handbook of Political Science, edited by Robert E. Goodin.
doi:10.1093/oxfordhb/9780199604456.013.0024.
Maslow, Abraham Harold. 1943. “A Theory of Human Motivation.” Psychological
Review 50 (4): 370–96.
Maynard, Rebecca A. 2006. “Evidence-Based Decision Making: What Will It Take
for the Decision Makers to Care?” Journal of Policy Analysis and Management
25 (2): 249–65. doi:10.1002/pam.20169.
Meier, Kenneth J., Nathan Favero, and Ling Zhu. 2015. “Performance Gaps and
Managerial Decisions: A Bayesian Decision Theory of Managerial Action.”
Journal of Public Administration Research and Theory 25 (4): 1221–46.
doi:10.1093/jopart/muu054.
Meier, Kenneth J, and Laurence J O’Toole. 2013. “I Think (I Am Doing Well),
Therefore I Am: Assessing the Validity of Administrators’ Self-Assessments of
Performance.” International Public Management Journal 16 (1): 1–27.
Miller, Gary J. 2005. “The Political Evolution of Principal-Agent Models.” Annual
Review of Political Science 8 (1): 203–25.
Miller, Paul M., and N. S. Fagley. 1991. “The Effects of Framing, Problem
Variations, and Providing Rationale on Choice.” Personality and Social
Psychology Bulletin 17 (5): 517–22.
Moynihan, Donald P. 2008. The Dynamics of Performance Management:
Constructing Information and Reform. Washington D.C.: Georgetown
University Press.
Nelson, Thomas E, Rosalee A Clawson, and Zoe M Oxley. 1997. “Media Framing of
a Civil Liberties Conflict and Its Effect on Tolerance.” American Political
Science Review 91 (3): 567–83.
Nelson, Thomas E, Zoe M Oxley, and Rosalee A Clawson. 1997. “Toward a
Psychology of Framing Effects.” Political Behavior 19 (3): 221–46.
Nickerson, Raymond. 1998. “Confirmation Bias: A Unique Phenomenon in Many
Guises.” Review of General Psychology 2 (2): 175–220.
Nielsen, Poul A., and Donald P. Moynihan. 2017. “How Do Politicians Attribute
Bureaucratic Responsibility for Performance? Negativity Bias and Interest
Group Advocacy.” Journal of Public Administration Research and Theory 27
(2): 269–83. doi:10.1093/jopart/muw060.
Nielsen, Poul Aaes. 2014. “Learning from Performance Feedback: Performance
Information, Aspiration Levels, and Managerial Priorities.” Public
Administration 92 (1): 142–60. doi:10.1111/padm.12050.
Niskanen, William A. 1971. Bureaucracy and Representative Government.
Chicago: Aldine-Atherton.
Nutley, Sandra, and Jeff Webb. 2000. “Evidence and the Policy Process.” In What
Works? Evidence-Based Policy and Practice in Public Services, edited by Huw
T.O. Davies, Sandra M. Nutley, and Peter C. Smith, 13–41. Bristol: The Policy
Press.
OECD. 2017. “Conference Summary.” In OECD Conference: Governing Better
through Evidence-Informed Policy Making. Paris.
Olsen, Asmus Leth. 2015. “Citizen (Dis)Satisfaction: An Experimental Equivalence
Framing Study.” Public Administration Review 75 (3): 469–78.
doi:10.1111/puar.12337.
Olsen, Asmus Leth. 2017. “Compared to What? How Social and Historical
Reference Points Affect Citizens’ Performance Evaluations.” Journal of Public
Administration Research and Theory 27 (4): 562–80.
doi:10.1093/jopart/mux023.
Oxford Dictionaries. 2016. “Word of the Year 2016 Is...”
https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016.
Paola, Maria De, and Vincenzo Scoppa. 2011. “Political Competition and Politician
Quality: Evidence from Italian Municipalities.” Public Choice 148 (3–4): 547–
59.
Petersen, Niels B. G., Trine V. Laumann, and Morten Jakobsen. 2018. “Acceptance
or Disapproval: Performance Information in the Eyes of Public Frontline
Employees.” Journal of Public Administration Research and Theory.
doi:10.1093/jopart/muy035.
Prior, Markus, Gaurav Sood, and Kabir Khanna. 2015. “You Cannot Be Serious: The
Impact of Accuracy Incentives on Partisan Bias in Reports of Economic
Perceptions.” Quarterly Journal of Political Science 10 (4): 489–518.
doi:10.1561/100.00014127.
Redlawsk, David P, Andrew J W Civettini, and Karen M Emmerson. 2010. “The
Affective Tipping Point: Do Motivated Reasoners Ever ‘Get It’?” Political
Psychology 31 (4): 563–93.
Rogers, Steven. 2017. “Electoral Accountability for State Legislative Roll Calls and
Ideological Representation.” American Political Science Review 111 (3): 555–
71. doi:10.1017/S0003055417000156.
Salge, Torsten O. 2011. “A Behavioral Model of Innovative Search: Evidence from
Public Hospital Services.” Journal of Public Administration Research and
Theory 21 (1): 181–210. doi:10.1093/jopart/muq017.
Samuel, Henry. 2017. “Emmanuel Macron Promises ‘Hand-to-Hand Combat’ with
Marine Le Pen in Key French Presidential TV Debate.” The Telegraph, 3 May
2017. https://www.telegraph.co.uk/news/2017/05/03/emmanuel-macron-
promises-hand-to-hand-combat-marine-le-pen-key/.
Sanderson, Ian. 2002. “Evaluation, Policy Learning and Evidence-Based Policy
Making.” Public Administration 80 (1): 1–22. doi:10.1111/1467-9299.00292.
Schick, Allen G, Lawrence A Gordon, and Susan Haka. 1990. “Information
Overload: A Temporal Approach.” Accounting, Organizations and Society 15
(3): 199–220.
Sheffer, Lior, Peter Loewen, Stuart Soroka, Stefaan Walgrave, and Tamir Sheafer.
2017. “Non-Representative Representatives: An Experimental Study of the
Decision Making Traits of Elected Politicians.” American Political Science
Review. doi:10.1017/s0003055417000569.
Shultz, Thomas R., Eléne Léveillé, and Mark R. Lepper. 1999. “Free Choice and
Cognitive Dissonance Revisited: Choosing ‘Lesser Evils’ Versus ‘Greater
Goods.’” Personality and Social Psychology Bulletin 25 (1): 40–48.
doi:10.1177/0146167299025001004.
Slothuus, Rune, and Claes H de Vreese. 2010. “Political Parties, Motivated
Reasoning, and Issue Framing Effects.” Journal of Politics 72 (3): 630–45.
Sniderman, Paul M, Richard A Brody, and Philip E Tetlock. 1991. Reasoning and
Choice: Explorations in Political Psychology. Cambridge: Cambridge
University Press.
Song, Miyeon, and Kenneth J Meier. 2018. “Citizen Satisfaction and the
Kaleidoscope of Government Performance: How Multiple Stakeholders See
Government Performance.” Journal of Public Administration Research and
Theory, forthcoming. doi:10.1093/jopart/muy006.
Sorek, Ayala Yarkoney, Kathryn Haglin, and Nehemia Geva. 2017. “In Capable
Hands: An Experimental Study of the Effects of Competence and Consistency
on Leadership Approval.” Political Behavior, 1–21.
doi:10.1007/s11109-017-9417-5.
Stein, Janice Gross. 2013. “Threat Perception in International Relations.” In The
Oxford Handbook of Political Psychology, edited by Leonie Huddy, David O
Sears, and Jack S Levy, 2nd ed., 364–94. New York: Oxford University Press.
Taber, Charles S., Damon Cann, and Simona Kucsova. 2009. “The Motivated
Processing of Political Arguments.” Political Behavior 31 (2): 137–55.
Taber, Charles S., and Milton Lodge. 2016. “The Illusion of Choice in Democratic
Politics: The Unconscious Impact of Motivated Political Reasoning.” Political
Psychology 37: 61–85. doi:10.1111/pops.12321.
Taber, Charles S, and Milton Lodge. 2006. “Motivated Skepticism in the Evaluation
of Political Beliefs.” American Journal of Political Science 50 (3): 755–69.
doi:10.1111/j.1540-5907.2006.00214.x.
Tavits, Margit. 2007. “Principle vs. Pragmatism: Policy Shifts and Political
Competition.” American Journal of Political Science 51 (1): 151–65.
doi:10.1111/j.1540-5907.2007.00243.x.
Tetlock, Philip E. 2017. Expert Political Judgment: How Good Is It? How Can We
Know? 2nd ed. Princeton: Princeton University Press.
Tetlock, Philip E. 1983. “Accountability and the Perseverance of First Impressions.”
Social Psychology Quarterly 46 (4): 285–92.
Tetlock, Philip E. 1985. “Accountability: A Social Check on the Fundamental
Attribution Error.” Social Psychology Quarterly 48 (3): 227–36.
Tetlock, Philip E, and Jae Il Kim. 1987. “Accountability and Judgment Processes in
a Personality Prediction Task.” Journal of Personality and Social Psychology
52 (4): 700–709.
Tilley, James, and Sara B Hobolt. 2011. “Is the Government to Blame? An
Experimental Test of How Partisanship Shapes Perceptions of Performance
and Responsibility.” Journal of Politics 73 (2): 316–30.
doi:10.1017/S0022381611000168.
Tversky, Amos, and Daniel Kahneman. 1981. “The Framing of Decisions and the
Psychology of Choice.” Science 211 (4481): 453–58.
doi:10.1126/science.7455683.
Valentino, Nicholas A., and Yioryos Nardis. 2013. “Political Communication: Form
and Consequence of the Information Environment.” In The Oxford Handbook
of Political Psychology, edited by Leonie Huddy, David O Sears, and Jack S
Levy, 2nd ed., 559–90. New York: Oxford University Press.
Walgrave, Stefaan, and Yves Dejaeghere. 2017. “Surviving Information Overload:
How Elite Politicians Select Information.” Governance 30 (2): 229–44.
Walle, Steven Van de, and Wouter Van Dooren. 2011. “Introduction: Using Public
Sector Performance Information.” In Performance Information in the Public
Sector: How It Is Used, edited by Wouter Van Dooren and Steven Van de
Walle, 1–12. Palgrave Macmillan.
Weber, Max. 1970. “Bureaucracy.” In From Max Weber: Essays in Sociology,
translated, edited, and with an introduction by H. H. Gerth and C. Wright
Mills. 1970 edition, first published in 1948. London: Routledge & Kegan Paul.
Webster, Donna M, Linda Richter, and Arie W Kruglanski. 1996. “On Leaping to
Conclusions When Feeling Tired: Mental Fatigue Effects on Impressional
Primacy.” Journal of Experimental Social Psychology 32: 181–95.
Wildavsky, Aaron. 1966. “The Political Economy of Efficiency: Cost-Benefit
Analysis, Systems Analysis, and Program Budgeting.” Public Administration
Review 26 (4): 292–310.
Willemsen, Martijn C., and Eric J. Johnson. 2011. “Visiting the Decision Factory:
Observing Cognition with MouselabWEB and Other Information Acquisition
Methods.” In A Handbook of Process Tracing Methods for Decision Research:
A Critical View and User’s Guide, edited by Michael Schulte-Mecklenbeck,
Anton Kühnberger, and Rob Ranyard, 21–42. New York: Psychology Press.
Wilsdon, James, Kristiann Allen, and Katsia Paulavets. 2014. “Science Advice to
Governments: A Briefing Paper for the Auckland Conference, 28–29 August
2014.” Auckland.
Zaller, John R. 1992. The Nature and Origins of Mass Opinion. New York:
Cambridge University Press.
Zurcher, Anthony. 2016. “Trump v Clinton: Who Won the Debate?” BBC News, 10
October 2016. https://www.bbc.co.uk/news/election-us-2016-37604731.
English summary
In recent years, the idea that information is key to good policymaking has
gained momentum. Evidence-based policymaking has become a buzzword in
democratic systems all over the world, reflecting a widespread belief that
policymakers will make better decisions if provided with factual information,
e.g. policy-relevant scientific evidence and systematic policy evaluations. As
a result, policymakers have never had access to more policy-relevant
information than they do today. However, while a great deal of attention has
been devoted to ensuring that policymakers have access to policy-relevant
information, much less has been devoted to understanding how policymakers
interpret the information they encounter. This is unfortunate, as even the
hardest facts must be interpreted by human beings in order to inform
decision-making. The dissertation sheds light on this issue by asking whether
psychological biases affect policymakers’ interpretation of policy-relevant
information and whether contextual factors moderate the impact of those biases
on policymakers’ interpretations.
A comprehensive experimental investigation, involving surveys of several
thousand elected policymakers, shows that policymakers are indeed affected by
psychological biases when interpreting policy-relevant information.
Experiments show that policymakers are biased by prior attitudes and beliefs
and by factors related to the presentation of information. Furthermore,
contextual factors do matter, but in ways that were not expected. Contrary to
theoretical expectations, policymakers’ interpretations become more affected
by political attitudes when the policymakers are presented with larger amounts
of evidence in support of a given conclusion and when they are asked to
justify their interpretation of the information. Interestingly, this behavior
differs from that of representative samples of ordinary (non-policymaker)
citizens recruited to participate in identical experiments. The results
suggest that the policymaker role affects policymakers’ behavior, and the
dissertation therefore calls for scholars with an interest in policymakers’
behavior (and the behavior of political elites more broadly) to take roles
more seriously than they have so far. Scholars should attempt to run studies
on samples of actual policymakers or, at a minimum, attempt to identify groups
of citizens who behave most like policymakers, instead of generalizing
uncritically from citizen-based findings.
The dissertation’s results question core assumptions behind the idea that
policy improvements will follow when policymakers get access to
policy-relevant information. When policymakers are biased in their
interpretation of information, they are also likely to be biased in their use
of that information. Future research should continue to investigate the impact
of the policymaker role on the behavior of policymakers (and the impact of
roles on political behavior more broadly) as well as the impact of contextual
factors on policymakers’ behavior.
Danish summary
Evidence-based policymaking has become a buzzword in democratic systems
across most of the world, and recent years have thus seen growing confidence
in information as the key to good political decision-making. Yet although
politicians have never had access to more information than they do today, our
knowledge remains limited when it comes to how politicians interpret the
information they are presented with as a basis for their decisions. This is
unfortunate, since the way information is interpreted must be expected to
affect how it is translated into decisions. This dissertation therefore
focuses on politicians’ interpretation of policy-relevant information by
asking whether their interpretation is affected by psychological biases, and
whether contextual factors influence the strength of these biases.
A comprehensive experimental investigation of several thousand elected
politicians shows, as expected, that politicians are biased, both by prior
political attitudes and beliefs and by how information is presented.
Contextual factors influence politicians’ tendency toward biased
interpretation, but in ways that are surprising in light of the dissertation’s
theoretical expectations. Contrary to those expectations, political attitudes
gain greater influence on politicians’ interpretations when the politicians
are presented with larger amounts of information, even when the information
points unambiguously toward one particular conclusion, and when they expect to
be asked to justify their interpretation. Representative samples of the Danish
population behave more in line with the dissertation’s theoretical
expectations, and the results thus suggest that the policymaker role affects
politicians’ behavior. This is an important insight for research on the
behavior of politicians (and of political elites more broadly), as it shows
that roles should be taken more seriously than they have been so far.
Researchers should not uncritically generalize findings from citizen-based
studies to politicians. Instead, they should run studies on samples of elected
politicians or, at a minimum, try to identify groups of citizens who behave
most like politicians.
The dissertation’s results challenge basic assumptions behind the idea that
politicians will make better decisions if given access to decision-relevant
information. If politicians’ interpretation of information is distorted by
psychological biases, their use of that information is likely to be biased as
well. Future research should continue to investigate the importance of the
policymaker role for politicians’ behavior (and of roles for behavior more
generally), and could also usefully continue to examine the importance of
contextual factors for the way