MOTIVATING AGENTS TO ACQUIRE INFORMATION∗
Charles Angelucci†
June 2015
Abstract

It is difficult to motivate advisors to acquire information through standard performance contracts when outcomes are uncertain and/or only observed in the distant future. Decision-makers may, however, exploit advisors' desire to influence the outcome. I build a model in which a decision-maker and two advisors care about the decision's consequences, which depend on an unknown parameter. To reduce uncertainty, the advisors can produce information of low or high accuracy; this information becomes public, but the decision-maker cannot assess its accuracy with certainty, and monetary transfers are unavailable. I show that, when the decision-maker can commit to a decision rule, he should behave as if the provided information is more accurate than it actually is, and thus grant excessive importance to the produced information.
Keywords: Information acquisition, experts, communication, advice.
JEL Classification Numbers: D00, D20, D80, D89, P16.
∗I am deeply indebted to Philippe Aghion, Emmanuel Farhi, Oliver Hart, Patrick Rey, and Jean Tirole for their invaluable guidance and support. I am also grateful to Nava Ashraf, Daisuke Hirata, and Andrei Shleifer for insightful comments. This paper has also benefited from discussions with Julia Cage, Wouter Dessein, Alfredo Di Tillio, Matthew Gentzkow, Renato Gomes, Ben Hebert, Christian Hellwig, Emir Kamenica, Nenad Kos, Thomas Le Barbanchon, Lucas Maestri, Eric Mengus, Simone Meraglia, Marco Ottaviani, Nicola Persico, Alexandra Roulet, Alp Simsek, Kathryn Spier, David Sraer, Eric Van den Steen, and audiences at Harvard, HBS, Yale SOM, Chicago Booth, Columbia, Kellogg, WUSTL, Toulouse, Bocconi, CUNEF, HEC, Rochester, Ecole Polytechnique, Universite Paris Dauphine, and the Petralia Workshop. Opinions and errors are mine.
†Columbia Business School. Email: [email protected]
1 Introduction
Information acquisition is central to decisions made under uncertainty, yet
it often relies on intermediaries. For instance, regulatory agencies benefit from
internal staff expertise; judges rely on prosecutors to collect evidence; and
managers ask subordinates to estimate projects’ returns. However, delegating
information acquisition responsibilities triggers a natural concern about the
quality of the evidence upon which the decision will be made.
In many environments, performance-based contracts offer a powerful means
of ensuring the provision of accurate information from experts (e.g., portfolio
managers). In other environments, performance is difficult to measure and
so alternative levers must be found. This is particularly true when the ap-
propriateness of decisions becomes apparent only in the distant future (e.g.,
regulatory actions), if ever (e.g., judicial sanctions).1 In addition, specifying
the desired quality of advice in a contract—to reward the information itself
rather than the outcomes—is often prohibitively costly or unenforceable in a
court of law, so that not only is performance difficult to assess, but also mon-
etary transfers are a rather coarse instrument.2 However, in many cases, a
decision-maker may be capable of motivating information-acquisition by ex-
ploiting her advisors’ desire to influence the outcome.3 This alternative channel
1The appropriateness of a past decision may also be hard to measure in the absence of clear counterfactuals (e.g., pollution standards).
2In many environments, monetary transfers contingent on performance are banned altogether (e.g., regulatory hearings, expert witnesses in courts of law, etc.). See Szalay [2005] and Alonso and Matouschek [2008] for discussions of this matter. In other settings—e.g., scientific journals—the decision-maker has only a limited budget (if any) to allocate.
3An advisor’s desire to influence the outcome may be due to his reputational concerns,or simply because the decision affects him directly.
of influence is the object of this paper.
I develop a model whose main insight is that the decision-maker should
make her decision exceedingly sensitive to the provided information. In par-
ticular, in certain informational environments, the decision-maker should act
as if the information provided is more accurate than it actually is. This, I
show, is the optimal way of structuring the final decision in a way that makes
it privately interesting for her advisors to acquire costly but accurate informa-
tion. I argue that this result squares well with the way organizations motivate
information acquisition and make decisions in practice.
A decision-maker interacts with one or two agents before taking a (contin-
uous) action. All parties are interested in the decision’s consequences, which
depend on some unknown state of the world.4 In fact, for most of the analysis,
players are assumed to agree on which decision is optimal when they have
access to the same information. To reduce uncertainty, agents privately ac-
quire one of two informationally-ranked signals about the state of the world,
with the more accurate signal being also the costliest one. The decision-maker
wishes to motivate agents to acquire the more accurate signal, but is unable
to infer with certainty which signal was acquired by looking at its realization.5
Monetary transfers contingent on the information itself are ruled out, and thus
the choice of the final action is the only means by which the decision-maker
can motivate the agents.6 The decision-maker is assumed capable of commit-
4See for instance Prendergast [2007], [2008] on the intrinsic motivation of bureaucrats.
5The signals share a common support.
6In the spirit of Grossman and Hart [1986] and Hart and Moore [1990], I posit that a contract specifying monetary transfers as a function of the information provided cannot be deposited in a court of law.
ting to a decision rule, which specifies the action to implement as a function of
the information gathered by the agents. This provides a good approximation
of the settings in which decision-makers can restrain their behavior through
pre-specified decision-making procedures (e.g., regulatory procedures, eviden-
tiary rules in the judicial system).7 Finally, the informational environment I
consider is such that higher realizations of either signal lead to correspondingly
higher actions preferred by the decision-maker and the agents, that is, each
player’s schedule of desired actions is upward sloping.
I first look at the version of the model with one agent. In this environment
I show that, to induce the acquisition of the more accurate signal, it is optimal
for the decision-maker to implement a higher (lower) action than desired when
observing a realization that is such that the agent would desire a lower (higher)
action under the less accurate signal. Such a distortion hurts the agent (and
the decision-maker) under the more accurate signal, but relatively more so
under the less accurate signal, thereby motivating information acquisition.
I then restrict attention to environments in which signals induce sched-
ules of desired actions that are rotations one of another around the action that
would be implemented under the prior belief (i.e., the status quo). In this envi-
ronment, I show that it is optimal for the decision-maker to implement higher
actions than desired when observing high realizations—i.e., realizations that lead to desired actions higher than the status quo under either signal—and lower actions than desired when observing low realizations—i.e., realizations that lead to desired actions smaller than the status quo under either signal. In
7Moreover, judges’ decisions are potentially subject to review by higher courts.
other words, it is optimal for the decision-maker to act as if the information
provided is more accurate than it actually is. Introducing an upward (down-
ward) distortion when observing a high (low) realization motivates the agent
to acquire the signal of high accuracy and have his posterior beliefs be less
anchored on the status quo.
It is common practice for organizations to commit to ex-post inefficient rules to foster information acquisition by their members (e.g., through the delegation of decision rights, the choice of default actions, etc.).8 One approach
close in spirit to the one advanced in this paper is for organizations to put ex-
cessive weight on the information produced by their own staff. Choosing and
committing to a weight to put on certain types of information (e.g., estimations
and forecasts) is routinely done by administrations which, for democratic and
efficiency reasons, must commit to transparent decision-making procedures
(e.g., US Sentencing Guidelines). An alternative approach consists of dis-
missing already available relevant information (e.g., reputation, past history,
information coming from the outside). For instance, the non-admissibility of
certain relevant pieces of evidence in the inquisitorial system of law (e.g., crim-
inal records) gives incentives to the prosecutor to find better information.9,10
More generally, it has been documented by organizational sociologists that
decision-makers within firms ignore relevant information or have poor “orga-
nizational memory”, (see for instance Feldman and March [1981] and Walsh
8See for instance Aghion and Tirole [1997], Dessein [2002], and Szalay [2005] on delegation, and Li [2001] on the choice of default actions.
9See Lester et al. [2012] on a related matter.
10Similarly, the editor of a scientific journal may acquire a reputation for not taking into account the authors' publishing records, so that the referee's input becomes even more pivotal.
and Ungson [1991]). My paper provides one possible efficiency rationale: by
disregarding available information, firms motivate employees by making their
(better) information pivotal.
If motivating information acquisition by one agent involves making the final decision excessively sensitive to the information he provides, it is not clear a priori whether this intuition survives when listening to several agents. To investigate this issue, I extend the baseline model by incorporating a second agent. I show that, to motivate each agent, it is optimal for the decision-maker to once again act as if the jointly provided information is more accurate than it actually is.
The rest of the paper is structured as follows. A short review of the liter-
ature concludes the introduction. Section 2 presents the setup of the model.
Section 3 characterizes the optimal decision rule for the baseline model with
one agent (Section 3.1); it then turns to the analysis with multiple agents
(Section 3.2). Section 4 provides a discussion, and Section 5 concludes.
Related Literature
This paper contributes to a recent strand of literature that incorporates
information acquisition in principal-agent models without monetary transfers
or, more generally, with limited or costly transfers (see, for instance, Aghion
and Tirole [1997], Dewatripont and Tirole [1999], Szalay [2005], Gerardi and
Yariv [2008a], Krishna and Morgan [2008], and Che and Kartik [2009]).11,12
Unlike Aghion and Tirole [1997] and Che and Kartik [2009], I assume that
the decision-maker can commit to an action to implement as a function of the
information provided. Szalay [2005] considers a set-up with, on the one hand, a
single agent who acquires costly information and, on the other hand, a decision-
maker unable to verify the validity of the acquired information. Szalay shows
that to induce information acquisition, it is optimal to delegate decision rights
to the agent while eliminating from the decision set those actions that the latter
would implement when uninformed. My paper is complementary to Szalay’s
in that it considers instances in which the decision-maker is able to verify the
acquired information. In the single-agent case, I show that the decision-maker
retains authority, possibly implements all actions, and makes the final decision
exceedingly sensitive to the agent’s information. In particular, the decision-
maker introduces small distortions when the information provided confirms
the prior, and large ones otherwise. Within this literature, to the best of
my knowledge, this paper is the first to investigate the issue of motivating
information acquisition by multiple agents in environments of nontransferable
utility.13
This paper is also closely related to the literature on experts and their
11See also Austen-Smith [1994], Dur and Swank [2005], Che and Kartik [2009], Argenziano et al. [2013] and Di Pei [2013] for models with information acquisition but lack of commitment.
12For models in which performance-contracts are available, see Demski and Sappington [1987] and Gromb and Martimort [2007].
13The presence of multiple experts has however been considered in other environments (though most often without information acquisition). See for instance Milgrom and Roberts [1986], Krishna and Morgan [2001], Battaglini [2004], Ambrus and Takahashi [2008], Gentzkow and Kamenica [2011], Gul and Pesendorfer [2012], and Bhattacharya and Mukherjee [2013].
career concerns, which typically posits that experts have no intrinsic interest
in the decision being made, but wish to signal that they are of high ability (see
Osband [1989], Scharfstein and Stein [1990], Holmstrom [1999], Ottaviani and
Sorensen [2006], and Bar-Isaac [2008]). In many of these papers, it is also the
case that the expert is delegated decision rights (see Demski and Sappington
[1987], Prendergast and Stole [1996], Milbourn et al. [2001], Prat [2005], Dai
et al. [2006], Malcomson [2009], and Fox and Van Weelden [2012]). To focus
on the implications of the agent’s interest in the outcome of the decision, I
ignore signalling issues. Also, in my environment, the delegation of decision
rights is never desirable.
The fact that it may be optimal for the decision-maker to exaggerate the
accuracy of the information provided is reminiscent of the optimality of hyper-
powered incentive schemes present in models of procurement that incorporate
information acquisition (see for instance Lewis and Sappington [1997] and Sza-
lay [2009]).14 In addition to showing that the spirit of this finding is present
also in models of expertise with nontransferable utility, I investigate whether it
survives in presence of multiple agents. More generally this paper contributes
to the mechanism design literature that explicitly considers information ac-
quisition (see Kim [1995], Persico [2000], Bergemann and Valimaki [2002], and
Shi [2012]).
This paper is also related to the literature on persuasion and disclosure
(see Grossman [1981], Milgrom [1981], Milgrom and Roberts [1986], Glazer and
14In a setting in which the agent has private information regarding the accuracy of a public signal about its costs of production, Chu and Sappington [2010] show that it may be optimal to induce production levels above or below the first-best.
Rubinstein [2004], Glazer and Rubinstein [2006], and Kamenica and Gentzkow
[2011]).15,16 One insight delivered by the model is that inducing an agent to
disclose information limits the decision-maker’s ability to motivate information
acquisition.
This paper touches upon similar issues as those found in the literature on
committees, most notably the issue of keeping several agents sufficiently pivotal
that they would engage in information acquisition (see Gilligan and Krehbiel
[1987], Persico [2004], Gerardi and Yariv [2008a], Gershkov and Szentes [2009],
and Hao and Suen [2009b] for a survey).
It also relates to papers investigating whether it is preferable for a decision
maker to take advice from agents who share her preferences or beliefs (e.g.,
see Dur and Swank [2005], Gerardi and Yariv [2008a] and Che and Kartik
[2009]).17 In line with these papers, I find that when agents need to acquire
information, some heterogeneity may be desirable.
Although the environment I consider is such that delegating decision rights
is never optimal, some of the techniques and concepts used—in particular the
construction of incentive-compatible mechanisms—are also present in the liter-
ature on delegation (see for instance Holmstrom [1977], Melumad and Shibano
[1991], Alonso and Matouschek [2008], Goltsman et al. [2009], Armstrong and
Vickers [2010], and Nocke and Whinston [2013]), which also assumes commitment.18
15See Milgrom [2008] for a survey.
16The case of unverifiable information has also received much attention, starting with the seminal paper by Crawford and Sobel [1982] and with more recent contributions (Dessein [2002], Battaglini [2004], Martimort and Semenov [2008], Kartik [2009], and Che et al. [2013]). See Sobel [2010] for a survey of the literature on Cheap Talk.
17For a model investigating this issue, but without information acquisition, see Landier et al. [2009].
Furthermore, some aspects of my analysis—and, in particular, the role
played by the likelihood ratio—are reminiscent of the early analyses of moral
hazard (see Mirrlees [1976], Holmstrom [1979], Holmstrom [1982], Grossman and Hart [1983], and Mirrlees [1999]), except that in my environment the moral
hazard problem cannot be solved with monetary transfers. Finally, I build
upon recent advances in comparative statics techniques to model the infor-
mation acquisition process (in particular Persico [2000] and Athey and Levin
[2001]) and use an information criterion introduced by Johnson and Myatt
[2006].19
2 The Setting
I consider a game with a decision-maker, DM , and up to two agents, A1
and A2. DM has payoff function u (a, ω), and Ai, for i = 1, 2, has payoff
function u (a, ω)−C (ei). Payoff functions depend on DM ’s action a ∈ R and
the unobservable state of the world ω ∈ Ω ⊆ R. The function C (ei) represents
the cost to Ai of exerting effort ei, and is explained in Section 2.2.
The state of the world ω is a random variable W with cumulative distribu-
tion function FW (ω) on support Ω. W is taken to have full support, i.e., the
associated density function satisfies fW (ω) > 0 for any ω ∈ Ω.
Finally, monetary transfers are assumed unfeasible and thus do not enter
payoffs. The importance of this assumption is discussed in Section 4.
18See Van den Steen [2008] for a set-up in which it is the potential difference of opinions between players that paves the way to the delegation of decision-rights.
19See Shi [2012] for a model on auctions also focusing on this informational environment.
2.1 Payoff Functions
I restrict my attention to gross payoff functions u(a, ω) that are continuous in both arguments, strictly concave in a, and whose second-order derivative with respect to a is independent of both ω and a.20 Common functional forms that satisfy these conditions include, for instance, −(ω − a)² and ωa − a²/2.
Moreover, let

a(ω) := argmax_{a ∈ R} u(a, ω)

denote the players' ideal action when W = ω. I assume the payoff function is
such that a(ω) is strictly increasing in ω. Players wish to match higher states
with higher actions.
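Both example payoffs can be checked numerically. The sketch below (an illustration of my own, not taken from the paper; the grid search is just a convenient device) confirms that each is maximized at a(ω) = ω, so the ideal-action schedule is indeed strictly increasing in ω:

```python
import numpy as np

def ideal_action(u, omega, grid=np.linspace(-10, 10, 200001)):
    """Grid-search approximation of argmax_a u(a, omega)."""
    return grid[np.argmax(u(grid, omega))]

u_quad = lambda a, w: -(w - a) ** 2       # u(a, w) = -(w - a)^2
u_lin = lambda a, w: w * a - a ** 2 / 2   # u(a, w) = w*a - a^2/2

for omega in (-1.0, 0.0, 2.5):
    # both payoffs peak at a = omega, an increasing function of the state
    assert abs(ideal_action(u_quad, omega) - omega) < 1e-3
    assert abs(ideal_action(u_lin, omega) - omega) < 1e-3
```

Note that both functions are strictly concave in a, with second derivative (−2 and −1, respectively) independent of both ω and a, as required above.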
2.2 Effort and Information Acquisition
Ai privately chooses his effort level ei ∈ E, which translates into the acquisition of a signal Xei that he chooses from a set of signals {Xe}e∈E. A signal Xe is distributed as GXe|W(x | ω). All signals share common support X ⊂ R, and x denotes a typical realization. Let GXe(x) = EW[GXe|W(x | ω)] represent the marginal distribution of signal Xe, with associated density function gXe(x). Agents have access to the same set of signals {Xe}e∈E, and realizations are conditionally independent.
Throughout the paper, I maintain the assumption that effort levels (i.e.,
signals) are privately chosen (i.e., unobservable to the other players), but that
20Assuming the second-order derivative with respect to a is independent of both ω and a implies that a player's expected payoff under a given belief about ω is a translation of this player's expected payoff under any other belief about ω.
signal realizations are publicly observable.
If effort levels (i.e., signals) were observable, the players' desired action when observing Xe1 = x and Xe2 = x′ would be:

a(x, x′ | Xe1, Xe2) := argmax_{a ∈ R} ∫Ω u(a, ω) dFW|Xe1,Xe2(ω | x, x′),

where updating occurs according to Bayes' rule. Suppose that signals are such that the players' ideal action is, all else equal, nondecreasing in each signal's realization, regardless of which signals are acquired.21
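This updating step can be made concrete with a simple specification (my own illustration, not one used in the paper) that satisfies the monotonicity requirement: W standard normal and two conditionally independent signals Xei = W + εi. Under quadratic loss, the desired action equals the posterior mean, a precision-weighted average that is nondecreasing in each realization:

```python
import numpy as np

def desired_action(x1, x2, s1, s2):
    """Posterior mean of W given X1 = x1 and X2 = x2, where W ~ N(0, 1) and
    X_i = W + eps_i with eps_i ~ N(0, s_i^2), independent across i given W.
    Under quadratic loss, the players' ideal action equals this posterior mean."""
    precision = 1.0 + 1.0 / s1**2 + 1.0 / s2**2  # prior plus both signal precisions
    return (x1 / s1**2 + x2 / s2**2) / precision

# the ideal action is nondecreasing in each signal's realization, all else equal
acts = [desired_action(x, 0.5, s1=1.0, s2=2.0) for x in np.linspace(-3, 3, 61)]
assert all(b >= a for a, b in zip(acts, acts[1:]))
```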
For most of the analysis, I suppose E = {e, ē}, and let X = Xe and X̄ = Xē. Moreover, I assume signal X̄ to be more informative than signal X, in the sense that:

∫X ∫X ∫Ω u(a(x, x′ | X̄, Xe2), ω) dFW|X̄,Xe2(ω | x, x′) dGX̄(x) dGXe2(x′)
> ∫X ∫X ∫Ω u(a(x, x′ | X, Xe2), ω) dFW|X,Xe2(ω | x, x′) dGX(x) dGXe2(x′),

∀Xe2.22 Moreover, I assume C(e) = 0 < C(ē) = c. Acquiring the more informative signal implies a higher disutility to the agents.
To ensure an interior solution, I require that the signals X and X̄ are such that the highest value the likelihood ratio gX(x)/gX̄(x) takes over the support X is lower than some threshold, where this threshold is problem-specific.23
21For most applications of my analysis, it is enough to restrict attention to signals that induce conditional densities for W that satisfy the monotone likelihood ratio property (MLRP). Recall that the conditional density function fW|X has the MLRP if for x′ > x, fW|X(ω | x′)/fW|X(ω | x) is nondecreasing in ω.
22Because agents have access to identical signals X and X̄, it is also the case that, holding Xe1 constant, DM's expected payoff is higher when Xe2 = X̄ than when Xe2 = X.
23This assumption is made to focus on the most interesting case in which DM is unable
Finally, note that because of the common support assumption, DM will be
unable to infer with certainty which signal was acquired when observing its
realization.24
2.2.1 Rotations
I sometimes focus on payoff functions and signals that induce schedules of desired actions that are rotations one of another. To ease exposition, suppose there is one agent only. The intuition for the case with two agents is provided in Section 3.2. The players' schedule of desired actions induced by signal X̄, denoted ā(x) = a(x | X̄), is a rotation of the schedule of desired actions induced by signal X, denoted a(x) = a(x | X), around some realization x̂ if:

(ā(x) − a(x))(x − x̂) > 0 ∀x ≠ x̂.

In these environments, I refer to the realizations higher than x̂ as high realizations. The desired action when observing a high realization is higher under signal X̄ than under signal X. Conversely, I refer to the realizations lower than x̂ as low realizations. The desired action when observing a low realization is lower under signal X̄ than under signal X.
Many payoff functions and signals lead to schedules of desired actions that
to (almost) implement the first-best by committing to take extreme actions when observing certain realizations x highly informative of the effort level. The intuition for this is reminiscent of Mirrlees [1999].
24Take the case of an agent collecting data and carrying out a statistical analysis (say, computing a correlation coefficient). Although the decision-maker can replicate the statistical analysis if given access to the data, she may lack the time or resources to verify the actual representativeness of the sample, and cannot infer it perfectly from observing the statistical output.
can be depicted as rotations one of another. Suppose, for instance, that payoff functions are such that the players' desired actions are equal to the expected value of W (e.g., payoffs are of the form −(ω − a)²). Then, an environment in which Ω = R, W ∼ N(0, 1), X = ρW + √(1 − ρ²)ε, and X̄ = ρ̄W + √(1 − ρ̄²)ε̄, with 0 < ρ < ρ̄ < 1, leads to schedules a(x) = ρx and ā(x) = ρ̄x, which are rotations one of another around x̂ = 0. Moreover, one can compute that the likelihood ratio gX(x)/gX̄(x) is constant and equal to 1.
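This Gaussian example is easy to verify by simulation. The sketch below (with hypothetical values ρ = 0.3 and ρ̄ = 0.8 of my own choosing) checks that each signal is marginally standard normal, so the likelihood ratio is indeed 1, that E[W | X = x] = ρx, and that the two schedules satisfy the rotation condition around 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho, rho_bar = 0.3, 0.8  # 0 < rho < rho_bar < 1, hypothetical values

w = rng.standard_normal(n)
x_lo = rho * w + np.sqrt(1 - rho**2) * rng.standard_normal(n)          # signal X
x_hi = rho_bar * w + np.sqrt(1 - rho_bar**2) * rng.standard_normal(n)  # signal X-bar

# both signals are marginally N(0, 1), so the likelihood ratio is 1 everywhere
assert abs(x_lo.std() - 1.0) < 0.01 and abs(x_hi.std() - 1.0) < 0.01

# E[W | X = x] = rho * x: the regression slope of w on each signal recovers rho
slope_lo = np.cov(w, x_lo)[0, 1] / x_lo.var()
slope_hi = np.cov(w, x_hi)[0, 1] / x_hi.var()
assert abs(slope_lo - rho) < 0.01 and abs(slope_hi - rho_bar) < 0.01

# rotation around 0: (a_bar(x) - a(x)) * x > 0 for every x != 0
for x in (-2.0, -0.5, 0.7, 3.0):
    assert (rho_bar * x - rho * x) * x > 0
```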
Similarly, an environment in which Ω = X = R+, with fW(ω) = (1/λ)e^(−ω/λ), fW|X̄(ω | X̄ = x) = (1/(λ̄x))e^(−ω/(λ̄x)), and fW|X(ω | X = x) = (1/(λx))e^(−ω/(λx)), where 1 < λ < λ̄, yields schedules a(x) = λx and ā(x) = λ̄x, which are rotations one of another around x̂ = 0.
Further examples include environments in which W ∼ U[0, 1], X = W + ε, and X̄ = W + ε̄, with ε ∼ N(0, 1/α) and ε̄ ∼ N(0, 1/ᾱ), and with X(ω) = [ω − ν, ω + ν].25
Decision Rules and Timing Most of the analysis is carried out assuming that DM can credibly commit to a deterministic decision rule a(x1, x2), which is a mapping from signal realizations into actions.26 The timing is as follows. First, DM announces a decision rule. Second, agents privately choose their signals. Third, the realizations of the signals are observed by all and, fourth, DM implements the action specified in the decision rule.
25As a last example, one can also consider an environment in which W ∼ N(1/2, 1/β) on Ω = [0, 1], with X̄ = W and X ∼ U[0, 1].
26In the environment with public information, the framework is one of moral hazard with binary actions. Using more general communication schemes—e.g., attempting to elicit the agents' effort levels—would not improve DM's payoff.
Finally, for simplicity, participation constraints at the agents’ level are
ignored. This is a natural assumption when agents are directly influenced by
the decision being made, so that their “participation” is not a choice. A brief
discussion is devoted to this issue in Section 4.
3 The Model
3.1 The Single-Agent Case
I begin by analyzing the single-agent case, assuming that DM does not observe which signal is acquired but does observe its realization. Since this section treats the case with one agent only, I simplify the notation by writing DM's and A's preferred actions under signal X̄ and signal X as, respectively, ā(x) and a(x).
The decision-maker’s problem. To focus on the case of interest, I assume
that A's disutility c when acquiring signal X̄ is high enough that the former has insufficient private incentives to acquire X̄ if DM implements her preferred action ā(x) for every x. Formally, I assume that

∫X ∫Ω u(ā(x), ω) dFW|X̄(ω | x) dGX̄(x) − c < ∫X ∫Ω u(ā(x), ω) dFW|X(ω | x) dGX(x).

If the inequality were reversed, then, rather trivially, it would always be optimal for DM to induce A to acquire signal X̄ and systematically implement ā(x) for every x. The immediate consequence of this assumption is that DM must necessarily introduce distortions in the decision rule to induce the acquisition of signal X̄, so that it is unclear a priori whether such a policy is desirable. In the following, again to focus on the case of interest, assume that it is optimal for the decision-maker to induce the agent to acquire signal X̄.27
Suppose DM wishes A to acquire signal X̄. She then designs the schedule of decisions a(x) to maximize her expected payoff:

∫X ∫Ω u(a(x), ω) dFW|X̄(ω | x) dGX̄(x), (1)

subject to inducing A to acquire signal X̄ rather than signal X:

∫X ∫Ω u(a(x), ω) dFW|X̄(ω | x) dGX̄(x) − c ≥ ∫X ∫Ω u(a(x), ω) dFW|X(ω | x) dGX(x). (2)

Inequality (2) tells us that DM must design the decision rule a(x) so that it is a sufficiently better match with the posterior beliefs induced by signal X̄ than with those induced by signal X. The existence of a solution is guaranteed by the strict concavity of payoff functions and the unboundedness of the action space.28
I solve this problem by setting up a Lagrangian, where γ is the multiplier associated with (2). The decision-maker then chooses a(x) to maximize pointwise:

∫X ∫Ω u(a(x), ω) dFW|X̄(ω | x) dGX̄(x) (3)
+ γ [∫X gX̄(x) ∫Ω u(a(x), ω) dFW|X̄(ω | x) dx − c − ∫X gX(x) ∫Ω u(a(x), ω) dFW|X(ω | x) dx].

27It will be so whenever the informational gain outweighs the payoff loss associated with the distortions.
28To see this, consider two realizations x′ and x′′ such that x̂ < x′ < x′′ and such that gX(x)/gX̄(x) > 1 for all x ∈ [x′, x′′], and set a(x) = ā(x) + η for all x ∈ [x′, x′′], where η > 0, and, say, a(x) = ā(x) for all other realizations. Then observe that, for all x ∈ [x′, x′′],

lim_{η→∞} [gX̄(x) ∫Ω u(ā(x) + η, ω) dFW|X̄(ω | x) − gX(x) ∫Ω u(ā(x) + η, ω) dFW|X(ω | x)] = +∞,

since (i) ∫Ω u(a, ω) dFW|X̄(ω | x) and ∫Ω u(a, ω) dFW|X(ω | x) are strictly concave in a (and with identical second-order derivatives), (ii) gX(x)/gX̄(x) > 1, and (iii), by definition of x > x̂, ā(x) > a(x) for all x ∈ [x′, x′′]. This, in turn, implies that the integral of the bracketed expression over [x′, x′′] also diverges to +∞ as η → ∞, and thus that inequality (2) holds under this hypothetical decision rule.
The assumptions made regarding the payoff function u(a, ω) (see Section 2) ensure that this optimization problem is convex for every x ∈ X, i.e., that the first-order condition is necessary and sufficient for every x.

In what follows, let K = {x ∈ X | ā(x) = a(x)}, that is, let K be the set of realizations at which the players' schedules of desired actions under signals X̄ and X intersect. Similarly, let K+ = {x ∈ X | ā(x) > a(x)} and K− = {x ∈ X | ā(x) < a(x)}.
Proposition 1 Suppose the decision-maker wishes the agent to acquire signal X̄. The optimal decision rule specifies an action a∗(x) such that:

1. a∗(x) = ā(x) if x ∈ K,

2. a∗(x) > ā(x) if x ∈ K+, and

3. a∗(x) < ā(x) if x ∈ K−.

Moreover, when x ∉ K, the distortion |a∗(x) − ā(x)| is increasing in |ā(x) − a(x)| and in gX(x)/gX̄(x).
Proof. See Appendix 1.
Proposition 1 states that if A has insufficient private incentives to gather signal X̄, then DM has no choice but to introduce an upward (downward) distortion when the realization x is such that A's desired action is higher (lower) in case he has exerted high effort (i.e., acquired X̄) than in case he has shirked (i.e., acquired X). Even though introducing such distortions hurts both DM and A in case the latter has exerted high effort, it hurts A relatively more in case he has shirked, because the implemented action is then very distant from what he desires. Specifically, because payoffs are concave, departing from the ex-post optimal action ā(x) in the direction away from a(x) involves a second-order payoff loss to DM and A in equilibrium, but a first-order payoff loss to A in case he has shirked, which increases A's incentives to exert high effort. Building on this logic, the greater the distance between what A desires in and out of equilibrium (i.e., the greater |ā(x) − a(x)|), the more it pays to introduce a distortion, and thus the greater the distance between a∗(x) and ā(x). Similarly, the greater the likelihood ratio gX(x)/gX̄(x), the more the distortion hurts A under signal X relative to signal X̄, and thus the greater the distortion. As in standard moral hazard problems, DM behaves as if interpreting a high likelihood ratio gX(x)/gX̄(x) as indicative of A having shirked, and conversely for low likelihood ratios. Finally, when the realization is such that A desires the same action in and out of equilibrium, introducing a distortion does not help incentivize information acquisition.

To summarize, to motivate the acquisition of accurate information, DM has no choice but to make the final decision excessively sensitive to the agent's information. If the agent were to (unobservably) deviate and acquire signal X, the final decision would be an overwhelmingly poor match with the posterior beliefs induced by that signal.
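Under quadratic loss, the pointwise first-order condition of the Lagrangian has a closed form, which makes the shape of the optimal rule transparent. The sketch below is my own illustration, not a result stated in the paper: the multiplier gamma is set to a hypothetical value rather than solved from the binding constraint, and the Gaussian example of Section 2.2.1 is used, so that the two signal densities coincide and the rule collapses to the ex-post optimum plus a distortion away from the shirker's desired action:

```python
import numpy as np

def optimal_rule(x, a_hi, a_lo, g_hi, g_lo, gamma):
    """Pointwise FOC of the Lagrangian under u(a, w) = -(w - a)^2:
       a*(x) = [(1+gamma)*g_hi*a_hi - gamma*g_lo*a_lo] / [(1+gamma)*g_hi - gamma*g_lo].
    The denominator is positive when g_lo/g_hi < (1 + gamma)/gamma, i.e. when the
    likelihood ratio is bounded, as assumed for an interior solution."""
    num = (1 + gamma) * g_hi * a_hi - gamma * g_lo * a_lo
    den = (1 + gamma) * g_hi - gamma * g_lo
    return num / den

rho, rho_bar, gamma = 0.3, 0.8, 0.5  # hypothetical parameter values
for x in (-2.0, -0.5, 0.7, 3.0):
    a_lo, a_hi = rho * x, rho_bar * x            # desired actions under each signal
    g = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # both signals marginally N(0, 1)
    a_star = optimal_rule(x, a_hi, a_lo, g, g, gamma)
    # equal densities: a* = a_hi + gamma*(a_hi - a_lo), a steeper rotation around 0
    assert np.isclose(a_star, a_hi + gamma * (a_hi - a_lo))
    # upward distortion where the high-accuracy action is higher, downward where lower
    assert (a_star - a_hi) * x > 0
```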
In the next two corollaries, we impose more structure on the informational
environment.
Corollary 1 If the schedule ā(x) is a rotation of the schedule a(x), then the optimal decision rule a∗(x) is also a rotation, such that:

(a∗(x) − ā(x))(x − x̂) > 0 ∀x ≠ x̂.

Moreover, if the schedule ā(x) is a steeper rotation of the schedule a(x), and the likelihood ratio gX(x)/gX̄(x) is non-decreasing in |x − x̂|, then a∗(x) is a steeper rotation of ā(x) around x̂.
Proof. See Appendix 1.
Corollary 1 states that if A has insufficient private incentives to gather
signal X̄, and the schedule of desired actions induced by signal X̄ is a rotation
of that induced by signal X, then DM designs a schedule of decisions which is
itself akin to a rotation around the realization x̂. In particular, the optimal de-
cision rule involves implementing actions higher than desired when observing
high realizations, and actions lower than desired when observing low realiza-
tions. The reason why introducing such distortions induces information
acquisition is as follows. Take a given high realization x (the logic is symmetric
for low realizations), and recall that high observations are those for which both
the decision-maker and the agent desire a higher action under signal X̄ than
under signal X. By implementing an even higher action than desired under
signal X̄, the decision-maker hurts the agent under either signal, but relatively
more so under signal X. This is because players suffer increasingly more as
the implemented action becomes more distant from their ideal action. (Note
that the ex-post optimal course of action ā(x) is by construction farther away
from the agent's desired action under signal X than from that under X̄. However,
if the cost c is high enough, this is insufficient to motivate the acquisition of
accurate information.)
Finally, if the schedule ā(x) is a steeper rotation of the schedule a(x),
and the likelihood ratio g_X(x)/g_X̄(x) is non-decreasing as |x − x̂| increases, the
magnitude of these distortions (that is, the distance between a∗(x) and ā(x))
becomes larger the more extreme the realization x is (i.e., the farther away x is
from x̂). It is optimal to do so because, in relative terms, realizations become
increasingly more likely under signal X than under signal X̄ as x becomes
more distant from x̂, so that distortions become increasingly more costly to
the agent under signal X (in expectation).
Proposition 2 At the optimum, the decision-maker behaves as if signal X̄ is
more informative than it actually is if:

1. u(a, ω) is such that a_X(x) = E_{W|X=x}[ω],

2. the schedule ā(x) is a steeper rotation of the schedule a(x) around x̂, and

3. x̂ is such that E_{W|X̄=x̂}[ω] = E_{W|X=x̂}[ω] = E_W[ω].

Proof. From Proposition 1, we know the optimal decision-rule is such that
a∗(x) > ā(x) when x > x̂, and conversely when x < x̂. Moreover, if a_X(x) =
E_{W|X=x}[ω], then preferences must be of the form:

l(ω) + ωa − a²/2,

where l(ω) : Ω → ℝ. When preferences are of this form, signal X̄ is more in-
formative than signal X if and only if V_X̄[E_{W|X̄=x}[ω]] > V_X[E_{W|X=x}[ω]].
Because a∗(x) is a steeper rotation of ā(x) around x̂, it follows that

V_X̄[a∗(x)] > V_X̄[E_{W|X̄=x}[ω]] = V_X̄[ā(x)].
When preferences are such that players wish to match the action with the
expected value of W, and when signals X and X̄ are such that it is optimal to
put more weight on x when it is generated by X̄ (i.e., when ā(x) is a rotation
of a(x)), DM optimally puts excessive weight on the realization x to motivate
information acquisition. The resulting schedule of implemented actions is itself
a rotation of ā(x), as if the provided information had been generated by a signal
more informative than X̄.
3.1.1 An Explicit Solution
To gain further insights, I now specify preferences and distributions so as
to compute explicit solutions. Suppose u(a, ω) = −(ω − a)², W ∼ N(0, 1),
and let the agent's signals be constructed as:

X = ρW + √(1 − ρ²)ε and X̄ = ρ̄W + √(1 − ρ̄²)ε,

where 0 < ρ < ρ̄ < 1, and where ε ∼ N(0, 1) and is independent of W.
One may compute that X | W = ω ∼ N(ρω, 1 − ρ²) and W | X = x ∼
N(ρx, 1 − ρ²), for X ∈ {X, X̄}. In addition, we have that a(x) = ρx and
ā(x) = ρ̄x. Finally, one may also verify that X ∼ N(0, 1) and X̄ ∼ N(0, 1),
implying that the likelihood ratio is constant and equal to 1, thereby making
the analysis particularly amenable to comparative statics.29
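These distributional facts are easy to verify by simulation. The sketch below uses assumed accuracies ρ = 0.4 and ρ̄ = 0.8 (illustrative values, not taken from the paper) and checks by Monte Carlo that each signal has a unit-variance marginal and that the posterior mean of W given X = x has slope ρ.

```python
# Monte-Carlo sanity check of the Gaussian facts, with assumed accuracies
# rho = 0.4 (signal X) and rho_bar = 0.8 (signal X-bar).
import math
import random

random.seed(1)
N = 200_000

def simulate(rho):
    xs, ws = [], []
    for _ in range(N):
        w = random.gauss(0.0, 1.0)
        x = rho * w + math.sqrt(1 - rho**2) * random.gauss(0.0, 1.0)
        ws.append(w)
        xs.append(x)
    var_x = sum(x * x for x in xs) / N                      # marginal variance of the signal
    slope = sum(w * x for w, x in zip(ws, xs)) / N / var_x  # regression slope of W on X
    return var_x, slope

var_lo, slope_lo = simulate(0.4)
var_hi, slope_hi = simulate(0.8)
assert abs(var_lo - 1.0) < 0.02 and abs(var_hi - 1.0) < 0.02      # X ~ N(0,1) marginally
assert abs(slope_lo - 0.4) < 0.02 and abs(slope_hi - 0.8) < 0.02  # E[W | X = x] = rho * x
# Posterior means rho*X are more variable under the more accurate signal.
assert slope_hi**2 * var_hi > slope_lo**2 * var_lo
```

The equal unit-variance marginals are what make the likelihood ratio constant and equal to 1 in this specification.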
Solving the optimization problem presented in Subsection 3.1 leads to the
following proposition.30
Proposition 3 If c ≤ 2ρ̄(ρ̄ − ρ), then a∗(x) = ρ̄x. Otherwise, the optimal
decision-rule is given by:

a∗(x) = (c / (2(ρ̄ − ρ))) x, where c/(2(ρ̄ − ρ)) > ρ̄.
Proof. See Appendix 1.
Proposition 3 tells us that the decision-maker must introduce a distortion
in the decision rule whenever the cost c of acquiring signal X̄ is sufficiently
high. In addition, these distortions are increasing in the cost c and decreasing
in the informational superiority of signal X̄ over signal X, as measured
by ρ̄ − ρ. To see the latter comparative static, recall that the agent also cares
about the accuracy of the match between the action and the state of the world.
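The logic of Proposition 3 can be traced with elementary algebra: under a linear rule a(x) = sx, the agent's expected gross payoff when his signal has accuracy r is −(1 + s² − 2sr), so acquiring the accurate signal is incentive compatible iff 2s(ρ̄ − ρ) ≥ c. The snippet below illustrates this with assumed parameter values.

```python
# Assumed parameter values; not calibrated to anything in the paper.
rho_lo, rho_hi = 0.4, 0.8
c = 0.5

def agent_gross_payoff(s, r):
    # E[-(W - s*X)^2] with unit-variance W and X and cov(W, X) = r
    return -(1.0 + s * s - 2.0 * s * r)

# DM's unconstrained optimum is s = rho_hi; it is incentive compatible iff
# 2 * s * (rho_hi - rho_lo) >= c.
ic_slack = 2 * rho_hi * (rho_hi - rho_lo) >= c
s_star = rho_hi if ic_slack else c / (2 * (rho_hi - rho_lo))
assert s_star == rho_hi                     # here 0.64 >= 0.5: no distortion needed

# With a higher cost the constraint binds and the rule becomes steeper than rho_hi:
c_high = 0.8
s_distorted = c_high / (2 * (rho_hi - rho_lo))
assert s_distorted > rho_hi
gap = agent_gross_payoff(s_distorted, rho_hi) - agent_gross_payoff(s_distorted, rho_lo)
assert abs(gap - c_high) < 1e-12            # the IC holds with equality
```

The binding case makes the slope c/(2(ρ̄ − ρ)), which is the closed form in the proposition.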
29However, we have that cov(W, X̄) = ρ̄ > cov(W, X) = ρ.
30In solving for the solution, I relax the assumption, made elsewhere in the paper, that c is large enough that the incentive compatibility constraint always binds.
Corollary A threshold c̄ exists such that it is strictly optimal to induce the
agent to acquire signal X̄ if and only if c < c̄.

Proof. The decision-maker's expected payoff when inducing the agent to
acquire signal X̄ (and c > 2ρ̄(ρ̄ − ρ)) is equal to:

−1 + 2ρ̄ (c / (2(ρ̄ − ρ))) − (c / (2(ρ̄ − ρ)))²,

while the decision-maker's expected payoff when inducing the agent to instead
acquire signal X is equal to:

−1 − ρ² + 2ρ². (4)

The statement in the corollary immediately follows from equating both ex-
pected payoffs and rearranging for c.
Rather intuitively, it is optimal for the decision-maker to induce the acqui-
sition of signal X̄ only if its cost is not too high and/or the associated reduction
in uncertainty is sufficiently large.
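A minimal numeric sketch of the corollary, under assumed values ρ = 0.4 and ρ̄ = 0.8: equating the two expected payoffs, which are quadratic in the implemented slope, yields one candidate closed form c̄ = 2(ρ̄ − ρ)(ρ̄ + √(ρ̄² − ρ²)); this closed form is my own rearrangement, which the code checks against the payoff expressions in the proof.

```python
import math

rho_lo, rho_hi = 0.4, 0.8   # assumed accuracies
delta = rho_hi - rho_lo

def payoff_inducing_accurate(c):
    # DM payoff -1 + 2*rho_hi*s - s^2 at the implemented slope: the first-best
    # slope rho_hi while the IC is slack, the distorted slope c/(2*delta) once it binds.
    s = max(rho_hi, c / (2 * delta))
    return -1 + 2 * rho_hi * s - s * s

payoff_inducing_cheap = -1 - rho_lo**2 + 2 * rho_lo**2   # expression (4), = -1 + rho_lo^2

# Candidate threshold obtained by equating the two payoffs (quadratic in the slope):
c_bar = 2 * delta * (rho_hi + math.sqrt(rho_hi**2 - rho_lo**2))
assert abs(payoff_inducing_accurate(c_bar) - payoff_inducing_cheap) < 1e-12
assert payoff_inducing_accurate(0.9 * c_bar) > payoff_inducing_cheap
assert payoff_inducing_accurate(1.1 * c_bar) < payoff_inducing_cheap
```

Below the threshold the distorted rule still beats settling for the cheap signal; above it, the distortion needed to satisfy the agent's IC is too costly to the decision-maker.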
3.2 The Two-Agent Model
We have learnt from the single-agent case that, to motivate information
acquisition, the agent has to be made excessively influential, in the sense that
the final decision must be made excessively sensitive to the information pro-
vided. In this section we investigate how this intuition survives in the presence
of multiple agents. For the sake of exposition, I assume that all three players
share (gross) payoff function u (a, ω).
Suppose now that there are two agents, agent 1 and agent 2, where
each may either acquire the less accurate signal X, at zero cost, or the more ac-
curate signal X̄, at cost c. Recall that realizations are conditionally in-
dependent. To save on notation, since players have identical gross pay-
offs, in the following let a_{X1,X2}(x1, x2) = a(x1, x2 | X1, X2). For simplicity,
I restrict the analysis with two agents to environments in which player i's
schedule of desired actions induced by signal X̄ is a steeper rotation of the
schedule induced by signal X around some realization x̂i(xj), for every signal
Xj ∈ {X, X̄} and every xj. Moreover, the realization x̂i(xj) is such that
E_{W|X̄,Xj}[ω | x̂i(xj), xj] = E_{W|X,Xj}[ω | x̂i(xj), xj] = E_{W|Xj}[ω | xj].
Finally, I suppose the likelihood ratio g_X(x)/g_X̄(x) is non-decreasing as x
becomes distant from x̂, where x̂ is defined by E_{W|X̄}[ω | x̂] = E_{W|X}[ω | x̂] = a0.

In this section, we focus on the case of interest, in which the decision-maker
induces each agent to acquire his more accurate signal. Extending the analysis
to the other possible cases is immediate.
The decision-maker now chooses the schedule a(x1, x2) to maximize pointwise
her expected payoff:

∫_X ∫_X ∫_Ω u(a(x1, x2), ω) dF_{W|X̄,X̄}(ω | x1, x2) dG_X̄(x1) dG_X̄(x2), (5)

subject to inducing agent 1 to acquire signal X̄ (with associated multiplier λ):

∫_X ∫_X ∫_Ω u(a(x1, x2), ω) dF_{W|X̄,X̄}(ω | x1, x2) dG_X̄(x1) dG_X̄(x2) − c ≥
∫_X ∫_X ∫_Ω u(a(x1, x2), ω) dF_{W|X,X̄}(ω | x1, x2) dG_X(x1) dG_X̄(x2), (6)

and subject to inducing agent 2 to acquire signal X̄ (with associated multiplier
γ):

∫_X ∫_X ∫_Ω u(a(x1, x2), ω) dF_{W|X̄,X̄}(ω | x1, x2) dG_X̄(x1) dG_X̄(x2) − c ≥
∫_X ∫_X ∫_Ω u(a(x1, x2), ω) dF_{W|X̄,X}(ω | x1, x2) dG_X̄(x1) dG_X(x2). (7)
The decision-maker needs to design a schedule a(x1, x2) which is on average
a sufficiently better match with the posterior beliefs jointly induced by the
pair of signals {X1 = X̄, X2 = X̄} than with those jointly induced by either of
the pairs {X1 = X, X2 = X̄} and {X1 = X̄, X2 = X}.31
Recall from the single-agent case that the signs of the distortions depended
on whether the realization x was "high" or "low," that is, on whether
the desired action when observing x under signal X̄ was higher or lower than
that under signal X. These considerations will still be useful in this
analysis with two agents, but some slight clarifications are first warranted. The
notion of a "low" or "high" realization/posterior belief must now be extended
to account for the fact that posterior beliefs are now jointly induced by the
signals X1 and X2. In particular, a distinction must be made between the
stage of the game at which realizations are known and the stage prior to it.
Take agent 1, for instance. His posterior beliefs after having observed
a given realization x2, and anticipating that agent 2 chose X̄, are still in-
creasing in the realization x1 of the chosen signal. In addition, as before,
his posterior beliefs are still increasing more steeply with x1 under signal
X̄ than under signal X. However, since the higher the realization x2 is,
the higher is agent 1's desired action for any realization and chosen sig-
nal, we have that the ex-post rotation-point E_{W|X̄}[ω | X̄ = x2] is increasing
in x2, i.e., dx̂1(x2)/dx2 > 0, where x̂1(x2) is defined as the realization x1 such
that E_{W|X̄,X̄}[ω | X̄ = x̂1(x2), X̄ = x2] = E_{W|X,X̄}[ω | X = x̂1(x2), X̄ = x2] =
E_{W|X̄}[ω | X̄ = x2].32

31Much like the single-agent case, a solution to this problem exists. In addition, by virtue of the restrictions imposed on payoff functions, this optimization problem is convex.
However, prior to knowing the realization of agent 2's signal, the relevant
rotation-point is still E_W[ω], because agent 1 expects agent 2's signal to con-
firm his prior on average.33 This last remark is of importance since the decision
rule is designed under the prior belief. In the following, let x̂ denote, as before,
the rotation-points computed under the prior belief.
Before proceeding to the characterization of the optimal decision rule, I
provide an example to gain further intuition.
Example. I extend the framework provided in Section 3.1.1 to the case with
two agents. Suppose W ∼ N(0, 1), and let the two signals at the disposal of
agent i = 1, 2 be constructed as:

X = ρW + √(1 − ρ²)εi and X̄ = ρ̄W + √(1 − ρ̄²)εi,

where 0 < ρ < ρ̄ < 1, and where εi ∼ N(0, 1) and is independent of W.

32Not surprisingly, by an identical reasoning, we also have dx̂2(x1)/dx1 > 0.
33Again, by an identical reasoning, it is also the case that x̂2 = E_W[ω].
Suppose further that all three players have payoff function u(a, ω) =
−(ω − a)². We may then compute that each player's desired action under
signals X1 = X̄ and X2 = X̄, when observing x1 and x2, is equal to:

a_{X̄,X̄}(x1, x2) = ((1 − ρ̄²)ρ̄ / (1 − ρ̄⁴)) (x1 + x2), (8)

while that under signals X1 = X and X2 = X̄, when observing x1 and x2, is
equal to:

a_{X,X̄}(x1, x2) = [(1 − ρ̄²)ρ x1 + (1 − ρ²)ρ̄ x2] / (1 − ρ²ρ̄²), (9)

and that under signals X1 = X̄ and X2 = X, when observing x1 and x2, is
equal to:

a_{X̄,X}(x1, x2) = [(1 − ρ̄²)ρ x2 + (1 − ρ²)ρ̄ x1] / (1 − ρ²ρ̄²). (10)
From these expressions, it follows that agent 1's "rotation point," conditional
on having observed x2, and anticipating that agent 2 has acquired X̄, is:

x̂1(x2) = (ρ̄ / (1 + ρρ̄³)) ((ρ̄² − ρ²) / (ρ̄ − ρ)) x2,

while agent 2's "rotation point," conditional on having observed x1, and antic-
ipating that agent 1 acquired X̄, is:

x̂2(x1) = (ρ̄ / (1 + ρρ̄³)) ((ρ̄² − ρ²) / (ρ̄ − ρ)) x1,

where (ρ̄/(1 + ρρ̄³))((ρ̄² − ρ²)/(ρ̄ − ρ)) < 1.
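The posterior-mean formulas (8)-(10) and the rotation points are standard Gaussian projections and can be double-checked mechanically. The sketch below uses assumed values for ρ and ρ̄; the 2×2 inversion mirrors the derivation in the text.

```python
rho_lo, rho_hi = 0.4, 0.8   # assumed accuracies

def posterior_weights(r1, r2):
    # W ~ N(0,1), X_i = r_i*W + sqrt(1-r_i^2)*eps_i, so Var(X) = [[1, r1*r2],
    # [r1*r2, 1]] and Cov(W, X) = (r1, r2); the posterior-mean weights follow
    # by Gaussian projection Cov(W,X) @ Var(X)^{-1}.
    det = 1 - (r1 * r2) ** 2
    return (r1 * (1 - r2**2) / det, r2 * (1 - r1**2) / det)

# Formula (8): both accurate, equal weights (1 - rho_hi^2)*rho_hi / (1 - rho_hi^4).
w1, w2 = posterior_weights(rho_hi, rho_hi)
target = (1 - rho_hi**2) * rho_hi / (1 - rho_hi**4)
assert abs(w1 - target) < 1e-12 and abs(w2 - target) < 1e-12

# Formula (9): agent 1 shirks; weights (1-rho_hi^2)*rho_lo and (1-rho_lo^2)*rho_hi,
# both over 1 - rho_lo^2 * rho_hi^2.
w1, w2 = posterior_weights(rho_lo, rho_hi)
den = 1 - rho_lo**2 * rho_hi**2
assert abs(w1 - (1 - rho_hi**2) * rho_lo / den) < 1e-12
assert abs(w2 - (1 - rho_lo**2) * rho_hi / den) < 1e-12

# Rotation point: the x1 at which agent 1's desired actions under the two
# signals coincide (given x2 = 1) lies strictly between 0 and x2.
a1, a2 = posterior_weights(rho_hi, rho_hi)
b1, b2 = posterior_weights(rho_lo, rho_hi)
x1_hat = (b2 - a2) / (a1 - b1)   # solve a1*x1 + a2 = b1*x1 + b2
assert 0 < x1_hat < 1
```

With these parameter values the crossing point is roughly 0.8·x2, consistent with a rotation-point slope strictly below one.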
Figure 2 depicts the classification of realizations as to whether they are
"high" or "low" once realizations are known. The lines x̂1(x2) and x̂2⁻¹(x2)
represent, respectively, agent 1's and agent 2's rotation points given the other
player's realization. For a given realization x2, if x1 > x̂2⁻¹(x2), then x2 is low.
Similarly, for a given realization x2, x1 is high if and only if x1 > x̂1(x2). The
areas filled in grey (A and B) are those for which both agents' realizations
are either low or high. These are all the combinations of x1 and x2 for which
the two agents' information is congruent. In particular, each agent's realization
is "high" ("low") if and only if both realizations are positive (negative) and
not too different from one another. The area above the grey surfaces, i.e., area
C, represents all the combinations of x1 and x2 such that x1 is high and x2 is
low. Finally, the area below the grey surfaces, i.e., area D, represents all the
combinations of x1 and x2 such that x1 is low and x2 is high.
[Insert Figure 2 here]
Returning to the general framework, it therefore follows from our dis-
cussion that when x1 and x2 are both high, we must have that
a_{X,X̄}(x1, x2), a_{X̄,X}(x1, x2) < a_{X̄,X̄}(x1, x2). Further, when x1 and x2 are both
low, then a_{X̄,X̄}(x1, x2) < a_{X,X̄}(x1, x2), a_{X̄,X}(x1, x2). If instead x1 is high and
x2 is low, then a_{X,X̄}(x1, x2) < a_{X̄,X̄}(x1, x2) < a_{X̄,X}(x1, x2), and symmetrically when
x1 is low and x2 is high.
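These orderings can be verified numerically from formulas (8)-(10). The check below uses assumed values ρ = 0.4, ρ̄ = 0.8 and one congruent and one incongruent pair of realizations.

```python
rho_lo, rho_hi = 0.4, 0.8   # assumed accuracies

def post_mean(r1, r2, x1, x2):
    # Posterior mean of W given X1 = x1 (accuracy r1) and X2 = x2 (accuracy r2),
    # per the Gaussian projection behind formulas (8)-(10).
    det = 1 - (r1 * r2) ** 2
    return (r1 * (1 - r2**2) * x1 + r2 * (1 - r1**2) * x2) / det

# Congruent, both high: shirking by either agent pulls the desired action down.
a_hh = post_mean(rho_hi, rho_hi, 1.0, 1.0)
assert post_mean(rho_lo, rho_hi, 1.0, 1.0) < a_hh
assert post_mean(rho_hi, rho_lo, 1.0, 1.0) < a_hh

# Incongruent, x1 high and x2 low: down-weighting the high x1 lowers the action,
# down-weighting the low x2 raises it, so the all-accurate action sits in between.
x1, x2 = 2.0, -0.5
a_mid = post_mean(rho_hi, rho_hi, x1, x2)
assert post_mean(rho_lo, rho_hi, x1, x2) < a_mid < post_mean(rho_hi, rho_lo, x1, x2)
```

The incongruent case is the one that forces the decision-maker to trade off the two agents' incentives, as discussed below Proposition 4.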
Proposition 4 Suppose the decision-maker wishes each agent to acquire his
more accurate signal. If (6) and (7) bind, then the optimal decision rule is
such that:

1. a∗(x1, x2) > a_{X̄,X̄}(x1, x2) if a_{X̄,X̄}(x1, x2) > a0,

2. a∗(x1, x2) = a_{X̄,X̄}(x1, x2) if a_{X̄,X̄}(x1, x2) = a0, and

3. a∗(x1, x2) < a_{X̄,X̄}(x1, x2) if a_{X̄,X̄}(x1, x2) < a0.

Moreover, the optimal decision-rule a∗(x1, x2) is such that ∂a_{X̄,X̄}(x1, x2)/∂x1 <
∂a∗(x1, x2)/∂x1 for all x2, and such that ∂a_{X̄,X̄}(x1, x2)/∂x2 < ∂a∗(x1, x2)/∂x2
for all x1.
Proof. See Appendix 2.
Intuition. When the information provided by the agents is congruent—in
the sense that x1 and x2 are both high or both low given each other—the solution
is similar in spirit to that stated in Proposition 2. If both realizations x1
and x2 are high given each other, DM finds it best to introduce an upward
distortion; conversely, when both realizations are low given each other,
DM finds it best to introduce a downward distortion. To see this, observe
that when both realizations are, for instance, high, both agents desire a lower
action under their less accurate signals than under their more accurate signals.
By introducing an upward distortion, DM hurts the agents under either signal,
but relatively more so in the case where they acquired the less accurate signals,
since players suffer increasingly more as the implemented action lies farther from
their preferred action.
However, the decision rule must be adapted to take into account the possi-
bility that the information provided by each agent is incongruent, in the sense
that one realization is high and the other is low. In these events, introducing,
say, an upward distortion will be useful in terms of incentive provision for the
agent whose realization is high, since his desired action under the less accurate
signal is lower than that under the more accurate one. However, for the other
agent, whose realization is low, introducing an upward distortion will dampen
his incentives, since it means implementing an action closer to his desired ac-
tion under the less accurate signal. In these instances, DM finds it best to
use all available information but, all else equal, behaves as if the information
provided by the agent whose signal’s realization is the most extreme is more
accurate than it actually is. The reason for this is twofold. First, given the
focus on an environment in which signals generate schedules of desired actions
that are rotations one of another, the most extreme realization is the one for
which the preferred action under the more accurate signal is the most distant
from the preferred action under the less accurate signal. Because agents suffer
increasingly more as the implemented action is farther from their ideal, and
ignoring the relative likelihood of the realization for now, it pays the most
to introduce a distortion targeting the agent whose signal’s realization is the
most extreme. In addition, because more extreme realizations become increas-
ingly more likely under the less accurate signal than under the more accurate
one, distortions become increasingly effective at providing incentives, which
reinforces the first effect. Finally, when realizations are incongruent, the most
extreme realization is also the one determining whether the desired action
a_{X̄,X̄}(x1, x2) is higher or lower than a0. As a consequence, introducing, say,
an upward distortion when a_{X̄,X̄}(x1, x2) > a0 mechanically targets the agent
whose realization is the most extreme, because the most extreme realization
is the one pulling the expected value of W under the posterior belief.
We therefore learn from the analysis with two agents that it is optimal
for DM to make the implemented action exceedingly sensitive to the jointly
provided information.
4 Extensions
In this section, I extend the framework of Section 3.1.1 to investigate the
nature of the optimal decision rule when DM is upward biased compared to
A, and briefly comment on two modeling assumptions made throughout the
analysis.
We return to the single-agent case. We suppose u (a, ω) = − (ω − a)2.
4.1 The Continuous Effort Case
Suppose the agent exerts effort e in a compact interval E ≡ [0, e], and
incurs a cost c (e) (with ce (e) > 0 and cee (e) > 0). To avoid corner solutions,
let ce (0) = 0 and ce (e) = +∞. For simplicity, I suppose the agent’s effort
does not affect the marginal distribution G (X) but only the posterior beliefs
about W , FW (ω | x, e), where FW (ω | x, e) is taken to be twice differentiable
with respect to e. Moreover, for any two effort levels e′ > e, I assume:
(a(x, e′
)− a (x, e)
)(x− x) > 0, ∀x 6= x. (11)
In words, the schedule of desired actions under the higher effort level is a
steeper rotation around x of that under the lower effort level. Finally, for
simplicity, I suppose (i)∫
ΩωFW (ω) dω = 0 and (ii) a (x, e) = 0, ∀e.
The decision-maker chooses the decision-rule a(x) and an effort level e ∈
[0, ē] to maximize her expected payoff:

−∫_X ∫_Ω (ω − a(x))² f(ω | x, e) dω dG(x), (12)

subject to:

−∫_X ∫_Ω (ω − a(x))² f(ω | x, e) dω dG(x) − C(e) ≥
−∫_X ∫_Ω (ω − a(x))² f(ω | x, e′) dω dG(x) − C(e′), (13)

for all e′ ∈ [0, ē].
We replace the global constraint (13) by the simpler local constraint:

−∫_X ∫_Ω (ω − a(x))² f_e(ω | x, e) dω dG(x) = C_e(e), (14)

which is a necessary condition for an interior effort level e to constitute an
optimum. We first characterize the solution to the relaxed problem, and then
derive sufficient conditions for this solution to also satisfy (13), and thus con-
stitute a solution to the decision-maker's problem. Denote the multiplier of
(14) by λ. Pointwise maximization with respect to a(x) yields:

a(x) = ∫_Ω ω f(ω | x, e) dω + λ ∫_Ω ω f_e(ω | x, e) dω, ∀x, (15)

and the first-order condition associated with e is:

−∫_X ∫_Ω (ω − a(x))² f_e(ω | x, e) dω dG(x)
+ λ (−∫_X ∫_Ω (ω − a(x))² f_ee(ω | x, e) dω dG(x) − C_ee(e)) = 0. (16)
Note from (15) that, because λ ≥ 0, a(x) is positive if x > x̂, equal to zero
if x = x̂, and negative if x < x̂. Provided λ > 0, the schedule a(x) is a
steeper rotation of the schedule a(x, e) around x̂. We now show that λ > 0.
The agent's expected payoff under schedule a(x) and effort level e can be
written as:34

−Var_W[ω] − ∫_X a(x)² dG(x) + 2 ∫_X a(x) ∫_Ω ω f_W(ω | x, e) dω dG(x) − C(e). (17)

Differentiating (17) twice with respect to e yields:

2 ∫_X a(x) E_ee[ω | x, e] dG(x) − C_ee(e). (18)

It is sufficient that (i) E_ee[ω | x, e] < 0 if x > x̂ and (ii) E_ee[ω | x, e] > 0 if
x < x̂ for (18) to be negative; that is, for the agent's expected payoff to be
concave. Suppose this condition holds. Finally, note that the first term in equation
(16) is positive, by virtue of condition (14) and the fact that C_e(e) > 0. Because
the expression within brackets in equation (16) is negative, λ > 0 necessarily.
Because the agent’s expected utility is concave for the solution of the re-
laxed problem, the first-order condition (14) is sufficient to characterize the
incentive constraints. As a result, the first-order conditions of the original
34Expression (17) can be derived using the fact that:∫X
∫Ω
ω2fW (ω | x, e) dωdG(x) =
∫X
(VarW |X [ω | X = x] + E [ω | X = x]
2)dG(x),
and the fact that, by the law of total variance:∫X
(VarW |X [ω | X = x]
)dG(x) = VarW [ω]−
∫X
(E [ω | X = x]
2)dG(x).
33
problem are the same as those of the relaxed problem, and the solution a∗(x)
and e∗ is characterized by equations (14), (15), and (16).
Proposition 5 provides a summary.
Proposition 5 The solution to the relaxed problem is the solution to the gen-
eral problem if ∫_Ω ω f_ee(ω | x, e) dω < 0 (> 0) for all x > x̂ (x < x̂), in which
case the optimal decision-rule, for a given e∗, is such that:

1. a∗(x) < a(x, e∗) if a(x, e∗) < 0,

2. a∗(x) = a(x, e∗) if a(x, e∗) = 0, and

3. a∗(x) > a(x, e∗) if a(x, e∗) > 0.

When effort is continuous, the decision-maker thus still finds it optimal to make the
final decision excessively sensitive to the information provided by the agent.
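As an illustration of equation (15), consider an assumed parameterization (not the paper's) in which the posterior mean is linear in the realization, E[ω | x, e] = ρ(e)x with ρ increasing and concave; the relaxed-problem rule then adds λ times the effort-derivative of the posterior mean, producing a steeper linear rule for any positive multiplier.

```python
# Illustration of equation (15) under an assumed linear-posterior
# parameterization E[omega | x, e] = rho(e) * x; e_star and lam are
# assumed values, not solutions of the full system.
import math

def rho(e):   # assumed accuracy schedule: increasing and strictly concave
    return math.tanh(e)

def drho(e, h=1e-6):   # numerical derivative of rho
    return (rho(e + h) - rho(e - h)) / (2 * h)

e_star, lam = 0.7, 0.3
slope_plain = rho(e_star)                       # undistorted rule: match E[omega | x, e]
slope_star = rho(e_star) + lam * drho(e_star)   # equation (15), pointwise in x
assert drho(e_star) > 0
assert slope_star > slope_plain                 # steeper rotation around x_hat = 0
```

The distortion term λ·∂E[ω | x, e]/∂e is exactly what tilts the rule toward the actions the agent would desire at higher effort.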
4.2 Systematic Biases
To investigate the consequences of systematic biases (i.e., biases that are
independent of ω), I take the framework found in Section 3.1.1 but let DM ’s
payoff function be such that u (a, ω, α, b) = − (αω + b− a)2, with b > 0. In
other words, not only has DM a different sensitivity to the underlying state
of nature relative to A, she is also systematically upward biased. Solving the
associated optimization problem leads to the following proposition.
Proposition 6 Suppose first that c ≤ 2αρ(ρ− ρ
). We then have that
a∗ (x) = aDM (x) = αρx+ b.
Suppose instead that c > 2αρ̄(ρ̄ − ρ). The optimal decision rule is then
given by:

a∗(x) = (c / (2(ρ̄ − ρ))) x + b, where c/(2(ρ̄ − ρ)) > αρ̄.

We therefore have that a∗(x) > a_DM(x) for x > x̂ = 0, and conversely.
Proof. See Appendix 1.
Compared to the solution found in Section 3.1.1, the optimal decision rule
when c is sufficiently high is shifted upwards by an amount equal to b. This is
because introducing such a constant bias hurts A by an equal amount under
both signals, so that it does not impact his incentives to gather signal X̄.
Figure 3 depicts the optimal solution for the case in which α = 1, where the
dashed line represents the optimal decision rule.
[Insert Figure 3 here]
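The irrelevance of b for incentives is immediate under the quadratic specification: with a(x) = sx + b and mean-zero W and X, the agent's expected loss separates as (1 + s² − 2sr) + b², so the payoff gap between the two signals does not involve b. A quick check, with assumed parameter values:

```python
rho_lo, rho_hi = 0.4, 0.8   # assumed accuracies
s = 1.1                      # an arbitrary slope steeper than rho_hi (assumed)

def agent_payoff(r, b):
    # E[-(W - s*X - b)^2] with mean-zero unit-variance W, X and cov(W, X) = r:
    # the constant b contributes b^2 regardless of which signal was acquired.
    return -((1 + s * s - 2 * s * r) + b * b)

baseline_gap = agent_payoff(rho_hi, 0.0) - agent_payoff(rho_lo, 0.0)
for b in (0.5, 2.0):
    gap = agent_payoff(rho_hi, b) - agent_payoff(rho_lo, b)
    assert abs(gap - baseline_gap) < 1e-9   # incentives to acquire X-bar unchanged
```

Only the slope of the rule affects the agent's information-acquisition problem; the intercept simply accommodates DM's systematic bias.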
4.3 Monetary Transfers
The environment considered in this paper is one of non-transferable utility
in which monetary transfers are not feasible. This naturally raises the question
of whether the main insights are preserved if the decision-maker is able to
commit to transfers contingent on the information provided.
First, observe that if the agent is not intrinsically motivated, and trans-
fers contingent on the realization x are possible, the problem boils down to
a standard moral hazard problem. In such a hypothetical environment, the
decision-maker would make positive payments when observing moderate real-
izations (i.e., realizations that are more likely under signal X̄ than under signal
X), and negative payments when observing extreme realizations, since these
would be indicative of the agent having "shirked." In fact, if the agent were
not protected by limited liability, the decision-maker could even, under some
circumstances, approximate the first-best by committing to a very large neg-
ative payment when observing sufficiently extreme realizations (see Mirrlees
[1999]).
However, if monetary transfers are possible and the agent is intrinsically
motivated (and protected by limited liability), rather intuitively, the decision-
maker should use both channels of influence to induce the agent to acquire
the more accurate signal, except in the very specific contexts in which one
instrument clearly dominates the other (say, when the agent's intrinsic motivation
is extremely low).
In such a general environment, we should thus expect large distortions and
low payments when observing extreme realizations, and small distortions and
positive payments when instead observing moderate realizations.
4.4 Participation Constraints
The analysis carried out in this paper has assumed that the agent's par-
ticipation was guaranteed. This is a natural assumption when agents can-
not escape the consequences of the decision being made (see Hao and Suen
[2009b]). However, extending the model to allow the decision-maker to make
flat payments would, on the one hand, not contradict the view that it is difficult
in practice to deposit a contract specifying transfers as a function of the
information provided and, on the other, trivially allow the former to ensure
the agent's participation. Naturally, this additional expense would imply that
inducing the acquisition of the more accurate signal is valuable to the
decision-maker for a smaller range of parameter values. Also, since a flat
payment does not impact the agent's information-acquisition incentives, the
shape of the optimal decision rule characterized in this paper would remain
unchanged.
5 Concluding Remarks
This paper has investigated a situation in which a decision-maker exploits
her agents’ desire to influence the final outcome to motivate information ac-
quisition. The main insight that emerges from the single-agent analysis is that
the decision-maker may have to act almost as if the information provided is
more accurate than it actually is, which inevitably implies adopting excessively
risky behavior. This is the optimal way of making the environment sufficiently
risky that it becomes privately worthwhile for the agent to acquire information
of good quality.
When several agents must be motivated, the decision-maker makes use of
all the provided information, but grants excessive weight to the agent whose
piece of information is the most extreme. This optimally exploits the fact that
posterior beliefs are most sensitive to precision when outcomes are extreme,
so that granting excessive importance to the agent whose piece of information
is the most extreme hurts him overwhelmingly in case he has shirked and his
posterior beliefs are still anchored on the prior.
The general insight that emerges is that decision-makers should put slightly
excessive weight on the information yet to be acquired, as opposed to that
available. In practice, organizations may achieve this by disregarding avail-
able and relevant information—by for instance having poor “organizational
memory”—or by appointing less experienced managers; that is, managers who
over-react to the information provided by agents.
References
Philippe Aghion and Jean Tirole. Formal and Real Authority in Organizations.
Journal of Political Economy, 105(1):1–29, 1997.
Ricardo Alonso and Niko Matouschek. Optimal Delegation. The Review of
Economic Studies, 75(1):259–293, 2008.
Attila Ambrus and Satoru Takahashi. Multi-sender Cheap Talk with Re-
stricted State Spaces. Theoretical Economics, 3(1):1–27, 2008.
Rossella Argenziano, Sergei Severinov, and Francesco Squintani. Strategic
Information Acquisition and Transmission. Mimeo Essex, 2013.
Chris Argyris and Donald A. Schon. Organizational Learning: A Theory of
Action Perspective. Reading MA: Addison-Wesley. ISBN 0-201-00174-8,
1978.
Mark Armstrong and John Vickers. A Model of Delegated Project Choice.
Econometrica, 78(1):213–244, 2010.
Susan Athey and Jonathan Levin. The value of Information in Monotone
Decision Problems. Mimeo Stanford, 2001.
David Austen-Smith. Strategic Transmission of Costly Information. Econo-
metrica, 62(4):955–963, 1994.
Heski Bar-Isaac. Transparency, Career Concerns, and Incentives for Acquiring
Expertise. The B.E. Journal of Theoretical Economics, 12(1):1–15, 2008.
Marco Battaglini. Multiple Referrals and Multidimensional Cheap Talk.
Econometrica, 70(4):1379–1401, 2002.
Marco Battaglini. Policy Advice with Imperfectly Informed Experts. The B.E.
Journal of Theoretical Economics., 4(1):1–34, 2004.
Dirk Bergemann and Juuso Valimaki. Information Acquisition and Efficient
Mechanism Design. Econometrica, 70(3):1007–1033, 2002.
Sourav Bhattacharya and Arijit Mukherjee. Strategic Information Revelation
when Experts Compete to Influence. Forthcoming, Rand Journal of Eco-
nomics, 2013.
Brandice Canes-Wrone, Michael C. Herron, and Kenneth W. Shotts. Lead-
ership and Pandering: A Theory of Executive Policymaking. American
Journal of Political Science, 45(3):532–550, 2001.
Yeon-Koo Che and Navin Kartik. Opinions as Incentives. Journal of Political
Economy, 117(5):815–860, 2009.
Yeon-Koo Che, Wouter Dessein, and Navin Kartik. Pandering to Persuade.
American Economic Review, 103(1):47–79, 2013.
Leon Yang Chu and David E. M. Sappington. Contracting with Private Knowl-
edge of Signal Quality. The RAND Journal of Economics, 41(2):244–269,
2010.
Vincent Crawford and Joel Sobel. Strategic Information Transmission. Econo-
metrica, 50(6):1431–1451, 1982.
Chifeng Dai, Tracy R. Lewis, and Giuseppe Lopomo. Delegating Management
to Experts. The Rand Journal of Economics, 37(3):503–520, 2006.
Joel S. Demski and David E.M. Sappington. Delegated Expertise. Journal of
Accounting Research, 25(1):68–89, 1987.
Wouter Dessein. Authority and Communication in Organizations. Review of
Economic Studies, 69(4):811–838, 2002.
Mathias Dewatripont and Jean Tirole. Advocates. Journal of political econ-
omy, 107(1):1–39, 1999.
Mathias Dewatripont, Ian Jewitt, and Jean Tirole. The Economics of Ca-
reer Concerns, Part I: Comparing Information Structures. The Review of
Economic Studies, 66(1):183–198, 1999.
Harry Di Pei. Communication with Endogenous Information Acquisition.
Mimeo MIT, 2013.
Robert Dur and Otto H. Swank. Producing and Manipulating Information.
The Economic Journal, 115(500):185–199, 2005.
Martha S. Feldman and James G. March. Information in Organizations as
Signal and Symbol. Administrative Science Quarterly, 26(2):171–186, 1981.
Justin Fox and Richard Van Weelden. Costly Transparency. Journal of Public
Economics, 96(1-2):142–150, 2012.
Matthew Gentzkow and Emir Kamenica. Competition in Persuasion. National
Bureau of Economic Research No. w17436, 2011.
Dino Gerardi and Leeat Yariv. Information Acquisition in Committees. Games
and Economic Behavior, 62(2):436–459, 2008a.
Dino Gerardi and Leeat Yariv. Costly Expertise. American Economic Review:
Papers and Proceedings, 98(2):187–193, 2008b.
Alex Gershkov and Balazs Szentes. Optimal Voting Schemes with Costly
Information Acquisition. The Journal of Economic Theory, 144(1):36–68,
2009.
Thomas Gilligan and Keith Krehbiel. Collective Decision-Making and Stand-
ing Committees: An Informational Rationale for Restrictive Amendment
Procedures. Journal of Law, Economics and Organization, 3(2):345–350,
1987.
Jacob Glazer and Ariel Rubinstein. On Optimal Rules of Persuasion. Econo-
metrica, 72(6):1715–1736, 2004.
Jacob Glazer and Ariel Rubinstein. A Study in the Pragmatics of Persuasion:
A game Theoretical Approach. Theoretical Economics, 1(4):395–410, 2006.
Maria Goltsman, Johannes Horner, Gregory Pavlov, and Francesco Squintani.
Mediation, Arbitration and Negotiation. Journal of Economic Theory, 144
(4):1397–1420, 2009.
Denis Gromb and David Martimort. Collusion and the Organization of Dele-
gated Expertise. Journal of Economic Theory, 137(1):271–299, 2007.
Sanford J. Grossman. The Informational Role of Warranties and Private Dis-
closure about Product Quality. Journal of Law and Economics, 24(3):461–
483, 1981.
Sanford J. Grossman and Oliver D. Hart. An Analysis of the Principal-Agent
Model. Econometrica, 51(1):7–45, 1983.
Sanford J. Grossman and Oliver D. Hart. The Costs and Benefits of Owner-
ship: A Theory of Vertical and Lateral Integration. The Journal of Political
Economy, 94(4):691–719, 1986.
Faruk Gul and Wolfgang Pesendorfer. The War of Information. The Review
of Economic Studies, 79(2):707–734, 2012.
Li Hao and Wing Suen. Decision-making in committees. Mimeo, Toronto,
2009a.
Li Hao and Wing Suen. Viewpoint: Decision-making in committees. Cana-
dian Journal of Economics/Revue canadienne d’economique, 42(2):359–392,
2009b.
Oliver Hart and John Moore. Property Rights and the Nature of the Firm.
Journal of political economy, 98:1119–1158, 1990.
Bengt Holmstrom. On Incentives and Control in Organizations. Ph.D. Disser-
tation, Stanford, 1977.
Bengt Holmstrom. Moral Hazard and Observability. The Bell Journal of
Economics, 10(1):74–91, 1979.
Bengt. Holmstrom. Moral Hazard in Teams. The Bell Journal of Economics,
13(2):324–340, 1982.
Bengt Holmstrom. Managerial Incentives Problems: a Dynamic Perspective.
The Review of Economic Studies, 66(1):169–182, 1999.
Justin P. Johnson and David P. Myatt. On the Simple Economics of Adver-
tising, Marketing, and Product Design. American Economic Review, 96(3):
756–784, 2006.
Emir Kamenica and Matthew Gentzkow. Bayesian Persuasion. American
Economic Review, 101(6):2590–2615, 2011.
Navin Kartik. Strategic Communication with Lying Costs. Review of Economic
Studies, 76(4):1359–1395, 2009.
Son Ku Kim. Efficiency of an Information System in an Agency Model. Econo-
metrica: Journal of the Econometric Society, 63(1):89–102, 1995.
Vijay Krishna and John Morgan. A Model of Expertise. The Quarterly Journal
of Economics, 116(2):747–775, 2001.
Vijay Krishna and John Morgan. Contracting for Information under Imperfect
Commitment. The Rand Journal of Economics, 39(4):905–925, 2008.
Augustin Landier, David Sraer, and David Thesmar. Optimal dissent in orga-
nizations. The Review of Economic Studies, 76,:761–794, 2009.
Benjamin Lester, Nicola Persico, and Ludo Visschers. Information Acquisition
44
and the Exclusion of Evidence in Trials. Journal of Law, Economics, and
Organization, 28(1):163–182, 2012.
Tracy R. Lewis and David E.M. Sappington. Information Management in
Incentive Problems. Journal of Political Economy, 105(4):796–821, 1997.
Hao Li. A Theory of Conservatism. Journal of Political Economy, 109(3):617
636, 2001.
James M. Malcomson. Principal and Expert Agent. The BE Journal of The-
oretical Economics, 9(1):1–36, 2009.
James G. March. Model Bias in Social Action. Review of Educational Research,
42(4):51–91, 1972.
David Martimort and Aggey Semenov. The Informational Effects of Competi-
tion and Collusion in Legislative Politics. Journal of Public Economics, 92
(7):1541–1563, 2008.
Nahum D. Melumad and Toshiyuki Shibano. Communication in Settings with
no Transfers. The RAND Journal of Economics, 22(2):173–1982, 1991.
Todd T. Milbourn, Richard L. Shockley, and Anjan V. Thakor. Managerial
Career Concerns and Investments in Information. RAND Journal of Eco-
nomics, 32(2):334–351, 2001.
Paul Milgrom. What the Seller Won’t Tell You: Persuasion and Disclosure in
Markets. The Journal of Economic Perspectives, 22(2):115–132, 2008.
45
Paul Milgrom and John Roberts. Relying on the Information of Interested
Parties. The RAND Journal of Economics, 17(1):18–32, 1986.
Paul R Milgrom. Good News and Bad News: Representation Theorems and
Applications. The Bell Journal of Economics, 12(2):380–391, 1981.
James A. Mirrlees. The Optimal Structure of Incentives and Authority within
an Organization. The Bell Journal of Economics, 7(1):105–131, 1976.
James A. Mirrlees. The Theory of Moral Hazard and Unobservable Behaviour:
Part I. The Review of Economic Studies, 66(1):3–21, 1999.
Richard E. Neustadt and Ernest R. May. Thinking in Time, The Use of History
for Decision Makers. Free Press, 1986.
Volker Nocke and Michael D. Whinston. Merger Policy with Merger Choice.
American Economic Review, 103(2):1006–1033, 2013.
Kent Osband. Optimal Forecasting Incentives. Journal of Political Economy,
97(5):1091–1112, 1989.
Marco Ottaviani and Peter Norman Sorensen. Professional Advice. Journal
of Economic Theory, 126(1):120–142, 2006.
Nicola Persico. Information Acquisition in Auctions. Econometrica, 68(1):
135–148, 2000.
Nicola Persico. Committee Design with Endogenous Information. The Review
of Economic Studies, 71(1):165–191, 2004.
46
Andrea Prat. The Wrong Kind of Transparency. 95(3):862–877, 2005. The
American Economic Review.
Canice Prendergast. The Motivation and Bias of Bureaucrats. The American
Economic Review, 97(1):180–196, 2007.
Canice Prendergast. Intrinsic Motivation and Incentives. The American Eco-
nomic Review, 98(2):201–205, 2008.
Canice Prendergast and Lars Stole. Impetuous Youngsters and Jaded Old-
Timers: Acquiring a Reputation for Learning. Journal of Political Economy,
104(6):1105–1134, 1996.
Paul Sabatier. The Acquisition and Utilization of Technical Information by
Administrative Agencies. Administrative Science Quarterly, 23(3):396–417,
1978.
David S. Scharfstein and Jeremy C. Stein. Herd Behavior and Investment. The
American Economic Review, 80(3):465–479, 1990.
Xianwen Shi. Contracts with Endogenous Information. Games and Economic
Behavior, 74(2):666 686, 2012.
Joel Sobel. Giving and Receiving Advice, 2010. Econometric Society 10th
World Congress.
Dezso Szalay. The Economics of Clear Advice and Extreme Options. The
Review of Economic Studies, 72(4):1173–1198, 2005.
47
Dezso Szalay. Contracts with Endogenous Information. Games and Economic
Behavior, 65(2):586–625, 2009.
Eric Van den Steen. Disagreement and the Allocation of control. The Journal
of Law, Economics Organization, 26(2):385–426, 2008.
James P. Walsh and Gerardo Rivera Ungson. Organizational memory. The
Academy of Management Review, 16(1):51–91, 1991.
48
Appendix 1
Proof of Proposition 1
Since I maximize pointwise, the problem boils down to finding, for every $x$, an action $a(x)$ that maximizes the objective function:

\[
\underbrace{\bar{g}_X(x)\int_{\Omega}u\left(a(x),\omega\right)dF_{W|\bar{X}}(\omega\mid x)}_{\text{I}}
+\gamma\Bigg[\underbrace{\bar{g}_X(x)\int_{\Omega}u\left(a(x),\omega\right)dF_{W|\bar{X}}(\omega\mid x)}_{\text{II}}
-\underbrace{\underline{g}_X(x)\int_{\Omega}u\left(a(x),\omega\right)dF_{W|\underline{X}}(\omega\mid x)}_{\text{III}}\Bigg],
\]

where the associated first-order condition is given by:

\[
\underbrace{\bar{g}_X(x)\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\bar{X}}(\omega\mid x)}_{\text{IV}}
+\gamma\Bigg[\underbrace{\bar{g}_X(x)\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\bar{X}}(\omega\mid x)}_{\text{V}}
-\underbrace{\underline{g}_X(x)\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\underline{X}}(\omega\mid x)}_{\text{VI}}\Bigg]=0.
\]
Consider first a given $x \in K^{+}$. To see that $a^{*}(x) > \bar{a}(x)$ necessarily, note that, on the one hand, IV $=$ V $= 0$ at $a = \bar{a}(x)$, while, on the other, $-\text{VI} > 0$ (since $\bar{a}(x) > \underline{a}(x)$), implying that the first-order derivative is strictly positive at $a = \bar{a}(x)$, which yields the strict inequality. Divide the first-order condition by $\bar{g}_X(x)$, and relabel functions IV, V and VI as, respectively, $\widetilde{\text{IV}}$, $\widetilde{\text{V}}$ and $\widetilde{\text{VI}}$. Further, interpret $\widetilde{\text{IV}} + \widetilde{\text{V}}$ as the marginal cost of increasing $a$ starting at $a = \bar{a}(x)$, and $-\widetilde{\text{VI}}$ as the marginal benefit. The greater the likelihood ratio $\underline{g}_X(x)/\bar{g}_X(x)$ is, the higher the function $-\widetilde{\text{VI}}$ is on $[\bar{a}(x),+\infty)$, and thus, all else equal, the higher $a^{*}(x)$ is. Similarly, because payoff functions are assumed concave, the greater the distance between $\bar{a}(x)$ and $\underline{a}(x)$ is, the higher the function $-\widetilde{\text{VI}}$ is on $[\bar{a}(x),+\infty)$, and thus, all else equal, the higher $a^{*}(x)$ is.

The proof for the case in which $x \in K^{-}$ is symmetric and thus omitted.
The first statement in Corollary 1 follows immediately. Moreover, if the schedule $\bar{a}(x)$ is a steeper rotation of the schedule $\underline{a}(x)$, then, by definition, $|\bar{a}(x) - \underline{a}(x)|$ increases as $|x - \hat{x}|$ increases. Suppose $x \in K^{+}$, i.e., suppose $x > \hat{x}$. If, in addition to the fact that $|\bar{a}(x) - \underline{a}(x)|$ increases as $x - \hat{x}$ increases, the likelihood ratio $\underline{g}_X(x)/\bar{g}_X(x)$ also increases as $x - \hat{x}$ increases, then the higher $x$ is, the higher the function $-\widetilde{\text{VI}}$ is on $[\bar{a}(x),+\infty)$, so that the greater $a^{*}(x) - \bar{a}(x)$ is.
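These comparative statics can be illustrated numerically. The sketch below is mine, not the paper's: it assumes quadratic payoffs $u(a,\omega) = -(\omega - a)^2$, a standard normal prior, and signals $x = \omega + \varepsilon$ with noise variance $1$ (high accuracy) or $4$ (low accuracy), plus an arbitrary multiplier $\gamma = 0.5$. Under these assumptions the pointwise first-order condition is linear in $a$, and the distortion $a^{*}(x) - \bar{a}(x)$ is positive and grows with $x$ on $K^{+}$, as the proof asserts.

```python
import math

def npdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

# Assumed illustrative parameters (not taken from the paper).
prior_var = 1.0
var_hi, var_lo = 1.0, 4.0   # signal noise variance: high vs low accuracy
gamma = 0.5                 # multiplier on the agent's IC constraint

def posterior_mean(x, noise_var):
    # E[w | x] for w ~ N(0, prior_var) and x = w + noise
    return prior_var / (prior_var + noise_var) * x

def marginal(x, noise_var):
    # Marginal density of the signal: N(0, prior_var + noise_var)
    return npdf(x, prior_var + noise_var)

def a_star(x):
    """Pointwise maximizer of (1+gamma)*g_hi(x)*E_hi[u] - gamma*g_lo(x)*E_lo[u].
    For quadratic u the FOC is linear in a; valid while the a^2-coefficient
    (1+gamma)*g_hi - gamma*g_lo stays positive, which holds for the x used here."""
    g_hi, g_lo = marginal(x, var_hi), marginal(x, var_lo)
    m_hi, m_lo = posterior_mean(x, var_hi), posterior_mean(x, var_lo)
    return ((1 + gamma) * g_hi * m_hi - gamma * g_lo * m_lo) / \
           ((1 + gamma) * g_hi - gamma * g_lo)

for x in (0.5, 1.0, 1.5):
    a_bar = posterior_mean(x, var_hi)   # ex-post optimal action under high accuracy
    print(x, a_star(x) - a_bar)         # distortion is positive and increasing in x
```

The distortion grows in $x$ here both because the likelihood ratio $\underline{g}_X(x)/\bar{g}_X(x)$ increases in $|x|$ (the low-accuracy marginal has fatter tails) and because the gap between the two posterior means widens, matching the two channels identified in the proof.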
Proofs of Proposition 3 and Proposition 6
Suppose $u(a,\omega,\alpha,b) = -(\alpha\omega + b - a)^2$ and $v(a,\omega) = -(\omega - a)^2$. DM chooses the schedule of decisions $a(x)$ to maximize her expected payoff:

\[
-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\alpha\omega + b - a(x)\right)^2 dG_{\bar{X}|W}(x\mid\omega)\,dF_W(\omega), \tag{19}
\]

subject to inducing A to acquire $\bar{X}$:

\[
-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\omega - a(x)\right)^2 dG_{\bar{X}|W}(x\mid\omega)\,dF_W(\omega) - c \;\geq\; -\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\omega - a(x)\right)^2 dG_{\underline{X}|W}(x\mid\omega)\,dF_W(\omega). \tag{20}
\]
Let $\gamma$ denote the Lagrange multiplier associated with this constraint. Since $\bar{g}_X(x) = \underline{g}_X(x)$, maximizing pointwise yields the first-order condition:

\[
a(x) = \alpha\bar{\rho}\,x + b + \gamma\left(\bar{\rho} - \underline{\rho}\right)x.
\]

Plugging this formula for $a(x)$ back into (20) and rearranging yields:

\[
\gamma = \frac{c - 2\bar{\rho}\,\alpha\left(\bar{\rho} - \underline{\rho}\right)}{2\left(\bar{\rho} - \underline{\rho}\right)^2}. \tag{21}
\]

We can infer from this last equation that (20) binds if and only if $c > 2\bar{\rho}\,\alpha\left(\bar{\rho} - \underline{\rho}\right)$. Substituting the formula for $\gamma$ into the first-order condition and rearranging concludes the proof (with $b = 0$ for Proposition 3).
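The algebra behind (21) can be sanity-checked numerically. The sketch below assumes the normalization $\mathrm{E}[x^2] = 1$ (my reading of the setup, since the proof uses $\bar{g}_X = \underline{g}_X$), under which $\mathrm{E}[\omega\,x] = \rho$ for either accuracy and the agent's expected-payoff gain from the accurate signal, under the linear rule above with slope $k = \alpha\bar{\rho} + \gamma(\bar{\rho} - \underline{\rho})$, reduces to $2k(\bar{\rho} - \underline{\rho})$; the constant $b$ drops out of the difference. The check confirms that this gain equals $c$ exactly when $\gamma$ takes the value in (21).

```python
import random

random.seed(0)

def ic_gain(alpha, gamma, rho_hi, rho_lo):
    """Agent's expected-payoff gain from the accurate signal under the rule
    a(x) = alpha*rho_hi*x + b + gamma*(rho_hi - rho_lo)*x, assuming E[x^2] = 1
    so that E[w*x] = rho under either accuracy. The gain is 2*k*(rho_hi - rho_lo),
    where k is the slope of a(x); the intercept b cancels."""
    k = alpha * rho_hi + gamma * (rho_hi - rho_lo)
    return 2 * k * (rho_hi - rho_lo)

for _ in range(1000):
    alpha = random.uniform(0.1, 2.0)
    rho_lo = random.uniform(0.0, 0.5)
    rho_hi = random.uniform(rho_lo + 0.1, 1.0)
    c = random.uniform(0.0, 2.0)
    # Multiplier from equation (21)
    gamma = (c - 2 * rho_hi * alpha * (rho_hi - rho_lo)) / \
            (2 * (rho_hi - rho_lo) ** 2)
    # Constraint (20) then holds with equality...
    assert abs(ic_gain(alpha, gamma, rho_hi, rho_lo) - c) < 1e-12
    # ...and gamma > 0 (the constraint binds) iff c > 2*rho_hi*alpha*(rho_hi - rho_lo)
    assert (gamma > 0) == (c > 2 * rho_hi * alpha * (rho_hi - rho_lo))

print("equation (21) verified on 1000 random parameter draws")
```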
Appendix 2
Proof of Proposition 4
The first-order condition associated with the decision-maker's problem is:

\[
\underbrace{\bar{g}_X(x_1)\,\bar{g}_X(x_2)\left(1+\lambda+\gamma\right)\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\bar{X},\bar{X}}(\omega\mid x_1,x_2)}_{\text{I}}
-\underbrace{\underline{g}_X(x_1)\,\bar{g}_X(x_2)\,\lambda\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\underline{X},\bar{X}}(\omega\mid x_1,x_2)}_{\text{II}}
-\underbrace{\bar{g}_X(x_1)\,\underline{g}_X(x_2)\,\gamma\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\bar{X},\underline{X}}(\omega\mid x_1,x_2)}_{\text{III}}=0.
\]
The proof for the cases in which $x_1$ and $x_2$ are both low or both high is almost identical to the analysis in the proof of Proposition 1, and is thus omitted for brevity. Suppose instead that $x_1$ and $x_2$ are such that agent 1's posterior belief is high while agent 2's is low. Suppose further that $x_1$ is more extreme than $x_2$, in the sense that $|x_1 - \hat{x}| > |x_2 - \hat{x}|$.

I first establish that the inequality $|x_1 - \hat{x}| > |x_2 - \hat{x}|$ implies the inequality:

\[
\left|a(x_1,x_2\mid\bar{X},\bar{X}) - a(x_1,x_2\mid\underline{X},\bar{X})\right| \;>\; \left|a(x_1,x_2\mid\bar{X},\bar{X}) - a(x_1,x_2\mid\bar{X},\underline{X})\right|. \tag{22}
\]
To see this, suppose first that $x_1, x_2 > \hat{x}$, implying that $\tilde{x}_2(x_1) > \tilde{x}_1(x_2)$. It then follows that $x_1 - \tilde{x}_1(x_2) > x_2 - \tilde{x}_2(x_1)$, since $x_1 - x_2 > 0 > \tilde{x}_1(x_2) - \tilde{x}_2(x_1)$. Suppose now that $x_1 > \hat{x} > x_2$. We then have that $\tilde{x}_2(x_1) > 0 > \tilde{x}_1(x_2)$, from which it again follows that $|x_1 - \tilde{x}_1(x_2)| > |x_2 - \tilde{x}_2(x_1)|$.$^{35}$ Finally, the inequality $x_1 - \tilde{x}_1(x_2) > x_2 - \tilde{x}_2(x_1)$ implies (22) by virtue of (i) the fact that the derivative of $a_{\bar{X}_1,\bar{X}_2}(x_1,x_2)$ with respect to $x_1$ is independent of $x_2$, and that with respect to $x_2$ is independent of $x_1$, (ii) the fact that gross preferences are identical, and (iii) the fact that the signals at the disposal of A1 and A2 have identical conditional densities.

Let us interpret I as the marginal cost of moving away from $a = a_{\bar{X},\bar{X}}(x_1,x_2)$, and $-(\text{II}+\text{III})$ as the marginal benefit. Suppose to begin with that $\lambda \geq \gamma$. The function II is negative at $a = a_{\bar{X},\bar{X}}(x_1,x_2)$ since $x_1$ is high. Conversely, because $x_2$ is low, the function III is positive. The function II $+$ III is nonetheless negative at $a = a_{\bar{X},\bar{X}}(x_1,x_2)$ because:

1. payoff functions being concave, inequality (22) implies that $\left|\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\underline{X},\bar{X}}(\omega\mid x_1,x_2)\right| > \left|\int_{\Omega}\frac{\partial u(a,\omega)}{\partial a}\,dF_{W|\bar{X},\underline{X}}(\omega\mid x_1,x_2)\right|$ at $a = a_{\bar{X},\bar{X}}(x_1,x_2)$,

2. $\frac{\underline{g}_X(x_1)}{\bar{g}_X(x_1)} > \frac{\underline{g}_X(x_2)}{\bar{g}_X(x_2)}$ since $|x_1 - \hat{x}| > |x_2 - \hat{x}|$, and

3. $\lambda \geq \gamma$.

$^{35}$It could also be the case that agent 1's posterior belief is high while agent 2's is low for $x_2 < x_1 < 0$. However, this would violate the condition that $|x_1| > |x_2|$.
Finally, since I equals zero at $a_{\bar{X},\bar{X}}(x_1,x_2)$, the first-order derivative at $a = a_{\bar{X},\bar{X}}(x_1,x_2)$ is positive, and it is thus strictly optimal to set $a^{*}(x_1,x_2) > a_{\bar{X},\bar{X}}(x_1,x_2)$.

The argument so far relied on the assumption that $\lambda \geq \gamma$. If the reverse holds, everything is as before as long as $\gamma$ is not too large. If, for a given $x_1$, $\gamma$ is sufficiently large that $\text{II} + \text{III} > 0$, then it is optimal to introduce a downward distortion. There thus exists a nonnegative function $r(\lambda,\gamma)$ such that it is optimal to set $a^{*}(x_1,x_2) > a_{\bar{X},\bar{X}}(x_1,x_2)$ if and only if $|x_1 - \hat{x}| > r(\lambda,\gamma)\,|x_2 - \hat{x}|$, where it is immediate that $\frac{\partial r(\lambda,\gamma)}{\partial\lambda} < 0$, $\frac{\partial r(\lambda,\gamma)}{\partial\gamma} > 0$, and $r(\lambda,\gamma) = 1$ if $\gamma = \lambda$.
The proof for the case in which agent 1's posterior belief is high while agent 2's is low, but $x_2$ is more extreme than $x_1$ and $x_2 < x_1 < 0$, is symmetric to the case solved above in which $0 < x_2 < x_1$: it is then optimal to introduce a downward distortion (if and only if $\lambda$ is not too large). Finally, the proof for the case in which agent 1's posterior belief is low while agent 2's is high is symmetric and thus omitted.
To conclude, note that because (i) the agents have identical payoff functions and access to an identical set of signals, and (ii) both agents' incentive-compatibility constraints are binding, it must be the case that the function $a^{*}(x_1,x_2)$ is such that $\lambda = \gamma$ at the optimum, so that $r(\lambda,\gamma) = 1$. To see this, suppose $\lambda > \gamma$. Then the left-hand sides of the two agents' incentive-compatibility constraints are equal to one another. However, the right-hand sides cannot be equal: the right-hand side of (9) is lower than that of (10), which contradicts the assumption that both constraints are binding.
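The direction of the distortion can again be checked numerically in a quadratic-Gaussian sketch of my own (illustrative parameters, not the paper's): with two conditionally independent signals $x_i = \omega + \varepsilon_i$ and quadratic payoffs, the first-order condition above is linear in $a$, so $a^{*}(x_1,x_2)$ has a closed form. With $\lambda = \gamma$ (so $r(\lambda,\gamma) = 1$), the optimum lies above $a_{\bar{X},\bar{X}}(x_1,x_2)$ when the high signal is more extreme than the low one, and below it when the low signal is more extreme.

```python
import math

def npdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

# Assumed illustrative parameters (not taken from the paper).
prior_var = 1.0
var_hi, var_lo = 1.0, 4.0   # noise variances of the accurate / inaccurate signals
lam = gamma = 1.0           # equal multipliers, so r(lam, gamma) = 1

def post_mean(x1, v1, x2, v2):
    # Posterior mean of w ~ N(0, prior_var) given x_i = w + e_i, e_i ~ N(0, v_i)
    t1, t2 = 1 / v1, 1 / v2
    return (t1 * x1 + t2 * x2) / (1 / prior_var + t1 + t2)

def marginal(x, v):
    # Signal marginal: N(0, prior_var + v)
    return npdf(x, prior_var + v)

def a_star(x1, x2):
    """Solve the (linear) first-order condition
    (1+lam+gamma)*g_hi(x1)*g_hi(x2)*(m_hh - a)
      - lam*g_lo(x1)*g_hi(x2)*(m_lh - a)
      - gamma*g_hi(x1)*g_lo(x2)*(m_hl - a) = 0."""
    w_hh = (1 + lam + gamma) * marginal(x1, var_hi) * marginal(x2, var_hi)
    w_lh = -lam * marginal(x1, var_lo) * marginal(x2, var_hi)
    w_hl = -gamma * marginal(x1, var_hi) * marginal(x2, var_lo)
    m_hh = post_mean(x1, var_hi, x2, var_hi)
    m_lh = post_mean(x1, var_lo, x2, var_hi)
    m_hl = post_mean(x1, var_hi, x2, var_lo)
    return (w_hh * m_hh + w_lh * m_lh + w_hl * m_hl) / (w_hh + w_lh + w_hl)

# Agent 1's (high) signal is more extreme than agent 2's (low): upward distortion.
print(a_star(2.0, -0.5), post_mean(2.0, var_hi, -0.5, var_hi))
# Agent 2's (low) signal is more extreme: downward distortion.
print(a_star(0.5, -2.0), post_mean(0.5, var_hi, -2.0, var_hi))
```

The mechanism matches item 2 of the proof: the likelihood ratio of the inaccurate to the accurate marginal increases in $|x_i|$ (fatter tails), so the more extreme signal's incentive term dominates and tilts the decision in its direction.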
[Figure 1: The Optimal Solution when $\alpha = 1$. The figure plots the schedules $\underline{a}(x)$, $\bar{a}(x)$, and $a^{*}(x)$ against the signal realization $x \in X$, with values in $\mathbb{R}$.]
[Figure 2: The Classification of the Signals' Realizations. The figure partitions the space of signal realizations into the regions A, B, C, and D, delimited by the curves $\tilde{x}_1(x_2)$ and $\tilde{x}_2^{-1}(x_2)$.]
[Figure 3: The Optimal Decision Rule when $\alpha = 1$ and $b > 0$. The figure plots the schedules $\underline{a}_A(x)$, $\bar{a}_A(x)$, $a_{DM}(x)$, and $a^{*}(x)$ against the signal realization $x \in X$, with values in $\mathbb{R}$.]