7/28/2019 Confid Rep Theorem
A Confidence Representation Theorem for
Ambiguous Sources with Applications to Financial Markets and Trade Algorithm
Godfrey Cadogan
Proceedings of Foundations and Applications of Utility, Risk and Decision Theory. Forthcoming
May 14, 2012
Abstract
This paper extends the solution space for decision theory by introducing a behavioural operator that (1) transforms probability domains, and (2) generates sample paths for confidence from catalytic fuzzy or ambiguous sources. First, we prove that average sample paths for confidence/sentiment, generated from within and across source sets, differ. So conjugate priors should be used to mitigate the difference. Second, we identify loss aversion as the source of Langevin type friction that explains the popularity of Ornstein-Uhlenbeck processes for modeling mean reversion of sample paths for behaviour. However, in large markets, ergodic confidence levels, imbued by Lichtenstein and Slovic (1973) and Yaari (1987) type preference reversal operations, support our large deviation theory of bubbles, crashes and chaos in behavioral dynamical systems. Third, simulation of the model confirms that the distribution of priors, on Gilboa and Schmeidler (1989) source sets, controls confidence momentum and the term structure of fields of confidence. For example, it explains the asset pricing anomaly of sensitivity of momentum trading strategies to starting dates in Moskowitz et al. (2012). Fourth, we provide several applications including but not limited to a sentiment based computer trading algorithm. For instance, our computer generated field of confidence mimics trends in the CBOE VIX daily sentiment index, and the survey driven Gallup Daily Economic Confidence Index (GDECI), sounding in Tversky and Wakker (1995) type impact events. We show how GDECI splits VIX into source sets that depict term structures of confidence for relative hope and fear. And we identify a VIX source set arbitrage strategy by classifying each set according to its risk attitude, and then use the confidence beta of each set to explain differences in price moves.
KEYWORDS: confidence; chaos; ambiguity; momentum; impact events; ergodic theory
JEL Classification Codes: C62, C65, D81, G00
I thank Zack Bikus for providing unpublished data on the Gallup Economic Confidence Index; Peter Wakker, Jan Kmenta and Oliver Martin for their comments and encouragement; and Jose H. Faro and Hykel Hosni for bringing my attention to their work, which improved the presentation within. Any errors which may remain are my own. Research support of the Institute for Innovation and Technology Management is gratefully acknowledged.
Corresponding address: Institute for Innovation and Technology Management, Ted Rogers School of Management, Ryerson University, 575 Bay, Toronto, ON M5G 2C5; e-mail: [email protected]; Tel: (786) 329-5469.
Contents
1 Introduction
2 The Model
  2.1 Stochastic confidence kernel induced by random domains
      2.1.1 Random set topology for ambiguity
  2.2 Sample function from field of confidence
  2.3 Average confidence over Gilboa-Schmeidler convex priors
3 Applications
  3.1 Construction of confidence field
  3.2 Confidence preferences
  3.3 Model Simulation
  3.4 A confidence based program trading algorithm
  3.5 Market Sentiment: hope, fear, bubbles, crashes, and chaos
  3.6 Confidence source sets and VIX induced confidence beta
      3.6.1 Confidence beta arbitrage
4 Conclusion
A Appendix of Proofs
  A.1 Proof of Theorem 2.3 (Confidence Representation)
  A.2 Proof of Lemma 3.3 (Ergodic Confidence)
  A.3 Proof of Proposition 3.4 (Large Market Bubble/Crash)
References
List of Figures
1  Confidence Trajectory From Loss Over Gain Domain
2  Confidence Trajectory From Gain Over Loss Domain
3  Example of basis field orientation
4  Confidence Motivated Pseudo Demand
5  Confidence Motivated Pseudo Supply
6  Confidence Motivated Pseudo Supply and Demand Equilibria
7  Gallup Monthly Economic Confidence Index from Survey Sampling
8  CBOE VIX Daily Market Uncertainty
9  Gallup Daily Economic Confidence Index vs. VIX: Source Set (A ∪ B)
10 Gallup Daily Economic Confidence Index vs. VIX: Source Set A
11 Gallup Daily Economic Confidence Index vs. VIX: Source Set B
1 Introduction
This paper introduces a kernel operator that (1) transforms probability domains, and (2)
generates sample paths for confidence representation initiated by convex random source sets
of priors induced by ambiguity. It also provides several applications of the results. The
operator is motivated by compensating probability factors generated by deviations of a sub-
jective probability measure from an equivalent martingale measure. See Harrison and Kreps
(1979) and Sundaram (1997) for equivalent martingale measures, and (Fellner, 1961, pg. 672)
for compensating probabilities. The magnitude of deviations are controlled by the curva-
ture of probability weighting functions. See Wu and Gonzalez (1996); Gonzalez and Wu
(1999). Our behavioural theory extends Tversky and Wakker (1995) who characterized the
shape of probability weighting functions as being dispositive of the impact of an event. For
example, they considered the steepness of a classic inverse S-shaped probability weighting
function (PWF) near its endpoints. And introduced the concept of bounded subadditivity
to explain the phenomenon of an impact event in which a subject transforms impossibility
into possibility, and possibility into certainty, in regions near the extremes of the PWF.
That event makes a possibility more or less likely in the middle portion of the PWF.
This paper introduces an operator or kernel function (that depends on a PWF) that shows
how Tversky-Wakker subjects transform probability domains. For instance, it shows how
a subject transforms loss probability domain into hope of gain to explain risk seeking be-
haviour. And how gain probability domain is transformed into fear of loss to explain risk
aversion. It extends the literature by showing how the operator also generates sample paths
for confidence. In particular, in Lemma 2.4 below, we show how loss aversion is akin to a
Langevin type frictional force that induces mean reversion in behaviour, typically modeled by
Ornstein-Uhlenbeck processes.
More recently, Abdellaoui et al. (2011) introduced a model in which they treated sources of
uncertainty as an algebra of events. In particular, they posited: "The function $w_S$, carrying subjective probabilities to decision weights, is called the source function." And they state unequivocally that source functions represent deviations from rational behavior. They also
report that a rich variety of ambiguous attitudes were found between and within the person.
Almost all of those results, or a reasonable facsimile of them, are predicted by our model.
Here, the source of uncertainty is reflected by a convex random set of prior probabilities for
unknown states. Theoretically, these random source sets are comprised of elementary events.
So they are consistent with sources of uncertainty as algebra of events. We address that
issue in Lemma 2.2 which establishes a nexus between algebra of events, ambiguity, and our
convex random source set of priors. In subsection 3.2 in this paper, we introduce Lemma
3.2 which shows how our confidence kernel, induced by an ambiguous random set of priors,
extends source functions to decision weights. Specifically, our confidence kernel is based
on the area under the source function or probability weighting function/curve adjusted
for loss gain probability spread relative to an equivalent martingale measure. So it naturally
extends the source function approach to ambiguity. By contrast, extant confidence indexes
are survey driven, i.e., Shiller (2000) and Manski (2004); or derived from comparatively ad hoc computer driven principal components analysis, i.e., Baker and Wurgler (2007). To the best
of our knowledge the confidence operator, and sample path representation for confidence
introduced in this paper are new.¹ In Corollary 2.7 we also make the case for the use
of conjugate priors as a mechanism for reducing discrepancy in the Gilboa and Schmeidler
(1989) set of priors. In the sequel we provide several applications for our theory, and conduct
a simple weak hypothesis test which upheld the source set hypothesis. For example, in
Proposition 3.5 on page 31 we show how our theory supports chaos in dynamical systems induced by ergodic confidence. This implies that such systems are unpredictable, at least over time.

¹We also note that the approach taken in this paper is distinguished from extant models of ambiguity introduced by the Italian school or otherwise. See e.g., Klibanoff et al. (2005) (smooth ambiguity); Maccheroni et al. (2006) (variational model that captures ambiguity); Cerreia-Vioglio et al. (2011) (uncertainty averse preferences); Cerreia-Vioglio et al. (2011) (rational model of ambiguity without certainty independence and uncertainty aversion); Chateauneuf and Faro (2009) (operator representation of confidence preferences).
The rest of the paper proceeds as follows. In section 2 we introduce our model. In subsection 3.1 we use a simple heuristic example to explain our theory, and thereafter provide several applications ranging from constructing confidence preferences, simulation, and programmed trading, to the role of confidence in bubbles, crashes, chaos and volatility in financial markets.
We conclude with perspectives for further research in section 4.
2 The Model
Let $p^*$ be a fixed point probability that separates loss and gain domains; and let $P_\ell = [0, p^*]$ and $P_g = (p^*, 1]$ be loss and gain probability domains as indicated, so that the entire domain is $P = P_\ell \cup P_g$. Let $w(p)$ be a probability weighting function (PWF), and $p$ be an equivalent martingale measure. The confidence index from loss to gain domain is a real valued mapping defined by
$$K : P_\ell \times P_g \to [-1, 1] \tag{1}$$
$$K(p_\ell, p_g) = \int_{p_\ell}^{p_g} [w(p) - p]\, dp = \int_{p_\ell}^{p_g} w(p)\, dp - \tfrac{1}{2}\left(p_g^2 - p_\ell^2\right), \quad (p_\ell, p_g) \in P_\ell \times P_g \tag{2}$$
We note that the kernel can be transformed even further so that it is singular at the fixed point $p^*$ as follows:
$$K^*(p_\ell, p_g) = \frac{K(p_\ell, p_g)}{p_g - p_\ell} = \frac{1}{p_g - p_\ell} \int_{p_\ell}^{p_g} w(p)\, dp - \tfrac{1}{2}\left(p_g + p_\ell\right) \tag{3}$$
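The kernel in (2) and its average in (3) are straightforward to evaluate numerically. A minimal sketch (our illustration, not the paper's estimation procedure), assuming the Prelec (1998) one-parameter PWF $w(p) = \exp\{-(-\ln p)^{\alpha}\}$ with an illustrative $\alpha = 0.65$ and midpoint-rule quadrature:

```python
import math

def prelec_w(p, alpha=0.65):
    """Prelec (1998) PWF; alpha = 0.65 is an illustrative choice."""
    return math.exp(-((-math.log(p)) ** alpha)) if p > 0 else 0.0

def confidence_kernel(p_loss, p_gain, w=prelec_w, n=10_000):
    """Eq. (2): K(p_loss, p_gain) = integral of [w(p) - p] dp, midpoint rule."""
    h = (p_gain - p_loss) / n
    mids = (p_loss + (i + 0.5) * h for i in range(n))
    return sum((w(m) - m) * h for m in mids)

def averaged_kernel(p_loss, p_gain, w=prelec_w):
    """Eq. (3): K*(p_loss, p_gain) = K / (p_gain - p_loss), an average."""
    return confidence_kernel(p_loss, p_gain, w) / (p_gain - p_loss)
```

Under a linear probability scheme $w(p) = p$ the kernel vanishes identically, which is the benchmark against which confidence is scored.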
The kernel accommodates any Lebesgue integrable PWF compared to any linear probability scheme. See e.g., Prelec (1998) and Luce (2001) for axioms on PWFs, and Machina (1982) for linear probability schemes. Evidently, $K^*$ is an averaging operator induced by $K$. The
estimation characteristics of these kernels are outside the scope of this paper. The interested
reader is referred to the exposition in Stein (2010). Let $T$ be a partially ordered index set on probability domains, and $T_\ell$ and $T_g$ be subsets of $T$ for indexed loss and indexed gain probabilities, respectively. So that
$$T = T_\ell \cup T_g \tag{4}$$
For example, for $\ell \in T_\ell$ and $g \in T_g$, if $\ell = 1, \ldots, m$; $g = 1, \ldots, r$, the index $T$ gives rise to an $m \times r$ matrix operator $K = [K(p_\ell, p_g)]$. The adjoint matrix $K^* = [K(p_g, p_\ell)] = [K(p_\ell, p_g)]^T$. So $K$ transforms gain domain into loss domain, implying fear of loss, or risk aversion, for prior probability $p_\ell$. While $K^*$ is an Euclidean motion that transforms loss domain into hope of gain from risk seeking for prior gain probability $p_g$. Thus, $K^*$ captures Yaari (1987) reversal of the roles of probabilities and payments, i.e., the preference reversal phenomenon in gambles first reported by Lichtenstein and Slovic (1973). Moreover, $K$ and $K^*$ are generated (in part) by prior probability beliefs consistent with Gilboa and Schmeidler (1989). If $V_g$ and $V_\ell$ are gain and loss domains, respectively, then: $K : V_g \to V_\ell$ and $K^* : V_\ell \to V_g$.
Let $f = \{(x_1, p_1), \ldots, (x_n, p_n)\}$ be a lottery in which outcome $x_i$ has associated probability $p_i$ of occurrence and $n = m + r$. By rank ordering outcomes relative to a reference point, the probability distribution $(p_1, \ldots, p_n)$ is ineluctably separated by $p^*$. Thus, our index allows us to produce a numerical score for a subject's confidence transformation of loss and or gain domains accordingly. In the sequel we make the following:
Assumption 2.1. Subjects do not update their prior beliefs over probability domains.
Remark 2.1. We make this arguably plausible assumption in order not to overload the paper with the effects of posterior probabilities as our subjects obtain more information. This would entail the introduction of a time domain for filtration of information. We leave that for another day. Thus, our results are pseudo-Bayes.
2.1 Stochastic confidence kernel induced by random domains
We now introduce the following:
Definition 2.1 (Stochastic kernel). Let $M = M_\ell \cup M_g$ be a space of loss gain probability measures; $\mathcal{S}$ be the $\sigma$-field of Borel subsets of $M$, and $\mu$ be a measure on $M$. The sub-$\sigma$-fields $\mathcal{S}|_{M_\ell}$, $\mathcal{S}|_{M_g}$ are the restrictions to loss and gain domains. A stochastic kernel $K$ concentrated on $M$ is a real [complex] valued mapping $K : M \times M \to Y$ such that for a point $p \in M$ and a set $B \in \mathcal{S}$, $K(p, B)$ has the properties: (i) for $p$ fixed it is a distribution over $B$; and (ii) for $B$ fixed it is a Baire function in $p$.
Remark 2.2. This definition is adapted from (Feller, 1970, pg. 221). Roughly, a Baire function is one that is either continuous or the pointwise limit of a sequence of functions. See (McShane and Bott, 1959, pp. 147-148).
Definition 2.2 (Stochastic density kernel). Let $k(p, y)$ be a stochastic density kernel. Then with respect to an arbitrary measure $\mu(dy)$ we write
$$K(p, B) = \int_B k(p, y)\, \mu(dy)$$
2.1.1 Random set topology for ambiguity
Like (Tversky and Wakker, 1995, pg. 1258), we assume that a state occurs but a subject is uncertain about which one. For example, given a sample space or set of states of nature $\Omega$, if $B_g \in \mathcal{S}|_{M_g}$, and $B_\ell = \{\omega \mid p_\ell(\omega) : \Omega \to M_\ell\}$, then $K(p_\ell(\omega), B_g)$ is a stochastic kernel controlled by the random set of priors $B_\ell$, i.e. a family of random measures $p_\ell(\omega)$ with initial distribution $p_\ell(\omega)$. This is functionally equivalent to Chateauneuf and Faro (2012)
fuzzy set of priors. See Zadeh (1968). Given a function $f$ (not the lottery above) in the domain $D(K)$ of $K$, we have the random integral equation
$$C_\ell^{p_\ell}(p_g, \omega) = C_\ell^{p_\ell(\omega)}(p_g) = (Kf)(p_\ell, \omega) = \int_{B_g} K(p_\ell, \omega, y) f(y)\, \mu(dy), \quad p_g \in B_g \tag{5}$$
Thus $C_\ell^{p_\ell}(p_g, \omega)$ is the transformation of $f \in D(K)$ into a distribution or trajectory over $B_g$ anchored at initial value $p_\ell(\omega)$. It is adapted to $\mathcal{S}$. In keeping with Abdellaoui et al. (2011) source function theory², we state the following
source function theory2, we state the following
Lemma 2.2 (Algebra of convex random set of priors).
$M_\ell$ is an algebra of the convex random set of priors $B_\ell$ such that if $B_{\ell,k}$, $k = 1, \ldots, m$ is a finite cover for $M_\ell$, then
$$M_\ell = \bigcup_{k=1}^{m} B_{\ell,k} \supseteq B_\ell \tag{6}$$
Proof. See (Gikhman and Skorokhod, 1969, pp. 41-42).
The intuition of the lemma, in which $p_\ell \in B_\ell \Rightarrow p_\ell \in M_\ell$, is as follows. Each [unobserved] covering set $B_{\ell,k}$ contains an elementary event of interest to our subject. In effect, the covering sets are like the balls that cover Ellsberg (1961) urn. In the simplest case, if the covering sets $B_{\ell,k}$ were disjoint, then at most [s]he could surmise the existence of an index $k_0$, say, such that $p_\ell \in B_\ell \cap B_{\ell,k_0}$. In which case, we have a [random] prior probability $p_\ell^0 = \{B_\ell \cap B_{\ell,k_0}\}$. However, [s]he does not know $k_0$ so [s]he is faced with ambiguity. If the covering sets are not disjoint, then $p_\ell$ lies in several covering sets. So we have a subset of unknown indexes $k_0, k_1, \ldots, k_m$ for the possible covering sets in which $p_\ell$ lies. That is, $p_\ell^{0,1,\ldots,m} = \{(\bigcup_{j=0}^{m} B_{\ell,k_j}) \cap B_\ell\}$. In that case, a subject endowed with Machina and Schmeidler (1992) probabilistic sophistication may opt to use entropy methods to discern a prior distribution.
²Fedel et al. (2011) also introduced algebras of events to characterize probabilistic reasoning. But their context is different from Abdellaoui et al. (2011).
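The covering-set intuition of Lemma 2.2 can be mimicked in a toy simulation (entirely our construction: the interval covering sets and the uniform draw are assumptions). A loss prior falling in overlapping covering sets leaves the subject with a set of candidate indexes, and a maximum-entropy (uniform) mixture is one simple way a probabilistically sophisticated subject might discern a prior distribution:

```python
import random

# Hypothetical covering sets B_{l,k} for a loss domain M_l = [0, 0.4]:
# overlapping intervals, so a prior can lie in several of them at once.
COVER = [(0.00, 0.15), (0.10, 0.25), (0.20, 0.40)]

def candidate_indexes(p_loss, cover=COVER):
    """Indexes k with p_loss in B_{l,k}; more than one candidate index
    means the subject cannot tell which covering set holds the prior."""
    return [k for k, (lo, hi) in enumerate(cover) if lo <= p_loss <= hi]

def max_entropy_prior(p_loss, cover=COVER):
    """Uniform (maximum-entropy) weights over the candidate covering sets:
    one simple entropy method for discerning a prior distribution over
    the unknown index k0."""
    ks = candidate_indexes(p_loss, cover)
    return {k: 1.0 / len(ks) for k in ks}

random.seed(7)
p = random.uniform(0.0, 0.4)   # a random loss prior p_l(omega)
weights = max_entropy_prior(p)
```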
2.2 Sample function from field of confidence
With the foregoing definition of ambiguity in mind, we proceed as follows.
Definition 2.3 (Random field of confidence and term structure).
Let $(\Omega, \mathcal{F}, P)$ be a probability space, $M = M_\ell \cup M_g$ be a space of loss gain probability measures, and $K : M \times M \to Y$ be a kernel function with range in $Y$. Thus we write $(P_\ell, \omega) \mapsto P_\ell(\omega) \in M_\ell$. Let $\mathcal{Y}$ be the $\sigma$-field of Borel subsets of $Y$. By Definition 2.1, $K$ is an $\mathcal{F}$-adapted Borel/Baire function for every point $(p_\ell(\omega), B_g)$. For $B_g \subseteq M_g$, and a convex set of prior loss probabilities $B_\ell = \{\omega \mid p_\ell(\omega) : \Omega \to M_\ell\}$, define the confidence function, with respect to a measure $\mu$ on $M$,
$$C_\ell^{p_\ell}(p_g, \omega) = (Kf)(p_\ell, \omega) = \int_{p_g \in B_g} K((p_\ell, \omega), p_g) f(p_g)\, \mu(dp_g), \quad g \in \{1, \ldots, r\} \tag{7}$$
For some set $E \in \mathcal{Y}$ we have $C_\ell^{p_\ell}(p_g, \omega) \in E$. For $g = 1, \ldots, r$ let $\mu_{1,\ldots,r}$ be a measure on $Y^r$ such that the joint distribution on the probability measure space $(Y^r, \mathcal{Y}^r, \mu_{1,\ldots,r})$ is given by
$$P\{\omega;\; C_\ell^{p_\ell}(p_1, \omega) \in E, \ldots, C_\ell^{p_\ell}(p_r, \omega) \in E\} = \mu_{1,\ldots,r}(E^r) \tag{8}$$
Then $\big((\Omega, \mathcal{F}, P), \{C_\ell^{p_\ell}(p_g, \omega)\}\big)$ is a random field representation of the measure $\mu_{1,\ldots,r}$. Moreover, $C_\ell^{p_\ell}(p_g, \omega)$, $\ell = 1, \ldots, m$ is a term structure field relative to the term $p_g$. The same definitions hold for the kernel function $K^* = K^T$, $B_\ell \subseteq M_\ell$, the convex set of prior gain probabilities $B_g = \{\omega \mid p_g(\omega) : \Omega \to M_g\}$, and $h \in D(K^*)$ for
$$C_g^{p_g}(p_\ell, \omega) = (K^* h)(p_g, \omega) = \int_{p_\ell \in B_\ell} K^*(p_\ell, (p_g, \omega)) h(p_\ell)\, \mu(dp_\ell), \quad \ell \in \{1, \ldots, m\} \tag{9}$$
Whereupon $C_g^{p_g}(p_\ell, \omega)$, $g = 1, \ldots, r$ is a term structure field relative to the term $p_\ell$.
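A discretized sketch of a sample function from Definition 2.3 (assumptions of ours: Prelec PWF, $f \equiv 1$, Lebesgue $\mu$, and a uniform draw of the loss prior from its source set). Each draw $\omega$ anchors a trajectory of kernel values over a grid of gain probabilities:

```python
import math
import random

def w(p, alpha=0.65):
    """Illustrative Prelec PWF (an assumption, not the paper's estimate)."""
    return math.exp(-((-math.log(p)) ** alpha)) if p > 0 else 0.0

def K(p_loss, p_gain, n=2000):
    """Confidence kernel of eq. (2) by midpoint-rule quadrature."""
    h = (p_gain - p_loss) / n
    return sum((w(p_loss + (i + 0.5) * h) - (p_loss + (i + 0.5) * h)) * h
               for i in range(n))

def sample_trajectory(rng, source_set=(0.05, 0.35), grid=None):
    """One sample function of the confidence field: a random loss prior
    p_l(omega) drawn from the convex source set B_l anchors a trajectory
    of kernel values over a grid of gain probabilities p_g."""
    p_l = rng.uniform(*source_set)
    if grid is None:
        grid = [0.4 + 0.06 * j for j in range(10)]
    return p_l, [K(p_l, p_g) for p_g in grid]

rng = random.Random(0)
p_l, path = sample_trajectory(rng)  # trajectory anchored at p_l(omega)
```

Redrawing $\omega$ moves the anchor $p_\ell(\omega)$, so the whole trajectory shifts: the sensitivity to initial conditions noted in the text.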
Remark 2.3. This definition is motivated by (Gikhman and Skorokhod, 1969, pp. 107-108).
Equations (7) and (9) are random integral equations of Volterra type of the first kind with
random initial value. See (Bharucha-Reid, 1972, pp. 135, 140, 148).
In Definition 2.3 above, $C_\ell^{p_\ell}(p_g, \omega)$ is a sample function³ from the space of confidence trajectories over gain probability domains, for given prior loss probability $p_\ell$ in the random source set $B_\ell(\omega)$. That is, $C_\ell^{p_\ell}(B_g, \omega)$, $p_\ell \in B_\ell(\omega)$ is a set function defined over gain probability domain. So confidence is sensitive to initial conditions. It is a pseudo-advection equation, i.e. it transports $p_\ell$, which suggests, and which we prove in the sequel, that under suitable conditions it could generate a chaotic dynamical system. We state the following
Theorem 2.3 (Confidence Representation). Let $M$ be the space of probability measures; $\mathcal{S}$ be the $\sigma$-field of Borel subsets of $M$; $\mu$ be a measure on $M$; and $I(\omega) = [p_\ell(\omega), p] \subset M$ be a random interval domain induced by a random prior probability $p_\ell(\omega)$ attributable to ambiguity aversion. Let $(\Omega, \mathcal{F}, P)$ be a probability space, and
$$C(p, \omega) = (Kf)(p, \omega) = \int_{I(\omega)} K(p, y) f(y)\, \mu(dy)$$
be a mean-square continuous (in $p$) sample function from a random field of confidence in Hilbert space $L^2(M, \mathcal{S}, \mu)$. The confidence kernel $K(p, y)$ is defined on $I(\omega) \times I(\omega) \setminus \{p_\ell(\omega)\}$. Let
$$\rho(p_1, p_2) = E[C(p_1, \omega) \bar{C}(p_2, \omega) \mid \mathcal{F}] = \int_\Omega C(p_1, \omega) \bar{C}(p_2, \omega)\, dP(\omega) \quad \text{a.s. } P$$
be the covariance function of $C$ for arbitrary points $(p_1, p_2) \in I(\omega) \times I(\omega)$, where $\bar{C}$ is the complex conjugate in the event $C$ is a complex function for inner products on $L^2$. Let $\{\xi_n(\omega)\}_{n=1}^{\infty}$ be an orthogonal sequence of $\mathcal{F}$-measurable random variables such that $E[|\xi_n(\omega)|^2 \mid \mathcal{F}] = \lambda_n$, where $\lambda_n$ is an eigenvalue of $\rho(p_1, p_2)$, with corresponding eigenfunction $\varphi_n(p)$. Then we have
$$C(p, \omega) = \sum_{n=1}^{\infty} \xi_n(\omega) \varphi_n(p) \quad \text{a.s. } P \tag{10}$$

³Chateauneuf and Faro (2009) introduced a confidence function that is different from ours.
Proof. See subsection A.1.
Remark 2.4. A similar representation is given in (Bharucha-Reid, 1972, pp. 145-146). Except there, the initial value for the integral domain is not random, and the covariance function and eigenvalues $\lambda_n$ pertain to $f$ in the domain $D(K)$, in the representation $C(p, \omega) = (Kf)(p, \omega)$, in which case
$$C(p, \omega) = \sum_{n=1}^{\infty} \lambda_n \xi_n(\omega) \psi_n(p) \tag{11}$$
and $f$ has a similar representation for some function $\psi_n(p) \in D(K)$.
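Representation (10) behaves like a Karhunen-Loève expansion: diagonalize the covariance, then rebuild the field from orthogonal random coefficients. A minimal numerical sketch (our own; the exponential covariance is a synthetic stand-in for the field's true covariance, and power iteration recovers only the leading eigenpair):

```python
import math
import random

def covariance(grid, ell=0.2):
    """Synthetic stand-in covariance rho(p1, p2) = exp(-|p1 - p2| / ell)."""
    return [[math.exp(-abs(a - b) / ell) for b in grid] for a in grid]

def leading_eigenpair(m, iters=1000):
    """Power iteration for the dominant eigenpair of a positive matrix."""
    n = len(m)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

grid = [0.1 * k for k in range(1, 10)]
rho = covariance(grid)
lam1, phi1 = leading_eigenpair(rho)

# One truncated KL-style term of eq. (10): C(p, omega) ~ xi(omega) * phi1(p),
# with the random coefficient scaled so that E[xi^2] = lam1.
xi = random.gauss(0.0, math.sqrt(lam1))
```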
Remark 2.5. As a practical matter confidence indexes are reported in space and time. Roughly, let $\rho(t, p, y)$ be the transition probability density for moving from a prior probability $p$ to a gain probability $y$ in time $t$. Let $U_t$ be an operator that characterizes the state of confidence $t$ periods after some initial state, say at time $t_0 = 0$. And define $C(t, p, \omega) = (U_t C)(p, \omega) = \int_{y \in I} \rho(t, p, y) C(y, \omega)\, dy$. By the same token, the state of confidence $t$ periods after it was at state $s$ is given by
$$C(t + s, p, \omega) = U_t (U_s C)(p, \omega) = (U_{t+s} C)(p, \omega)$$
This gives rise to a semi-group operator structure $U_{t+s} = U_t U_s$ for confidence. Moreover, by imposing certain conditions on the evolution of $\rho(t, p, y)$, we can construct a behavioural Markov random field over time and space. However, the explicit inclusion of the time dimension in the ergodic nature of confidence, described immediately below and proved later in Lemma 3.3, infra, would take us too far afield. See e.g. (Hille and Phillips, 1957, Ch. XVIII). Thus, we limit our analysis to probability domains only.
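The semigroup identity $U_{t+s} = U_t U_s$ is the Chapman-Kolmogorov property of the transition operator; with a discretized transition matrix it can be checked directly (the two-state matrix below is purely illustrative):

```python
def matmul(a, b):
    """Multiply two square matrices stored as row-major lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical one-period transition density between a "low confidence"
# and a "high confidence" state (rows sum to one).
P1 = [[0.9, 0.1],
      [0.2, 0.8]]

def U(t):
    """Discrete-time transition operator U_t = P1^t; U_0 is the identity."""
    out = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        out = matmul(out, P1)
    return out

lhs = U(5)                # U_{t+s} with t = 2, s = 3
rhs = matmul(U(2), U(3))  # U_t U_s
```

The two matrices agree entrywise, which is the discrete analogue of the semigroup structure.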
Figure 1 on page 11 depicts a sample function from the random field of confidence over the random interval $I(\omega) = [p_\ell(\omega), p_g]$. The initial confidence level $C_\ell^{p_\ell}(p_g^L)$ over gain domains starts at the lowest level for gain probability $p_g^L$, moving from left to right according to the [forward] operator $K$. In effect, we depict a set function $C_\ell^{p_\ell}(B_g)$, $p_\ell \in B_\ell$. Notice the
Figure 1: Confidence Trajectory From Loss Over Gain Domain
Figure 2: Confidence Trajectory From Gain Over Loss Domain
arrows' [downward] push in the direction of gain. By contrast, Figure 2 depicts a sample function over the random interval $I(\omega) = [p_\ell, p_g(\omega)]$. It starts at $C_g^{p_g}(p_\ell^H)$ from the highest level for loss probability $p_\ell^H$, moving from right to left, for a set function $C_g^{p_g}(B_\ell)$, $p_g \in B_g$. Notice that the orientation of $C_g^{p_g}(p_\ell^H)$ is equivalent to a clockwise rotation of $C_\ell^{p_\ell}(p_g^L)$ and a reversal of direction, according to the adjoint [backward] confidence operation $K^* = K^T$. The shaded regions overlap the Gilboa and Schmeidler (1989) convex set of prior probabilities that determine the starting point for each trajectory. See also, (Feller, 1970, pp. 270-271). Moreover, notice the push-pull effect induced by the opposing pull or force in the loss direction to counter the push in the gain direction. This is similar to the Langevin equation for Brownian motion of a particle with friction. It identifies loss aversion as the source of mean reversion in confidence and the popularity of Ornstein-Uhlenbeck processes in modeling behaviour in mathematical finance. See e.g. (Karatzas and Shreve, 1991, pg. 358). Thus, we state the following conjecture without proof.
Lemma 2.4 (Mean reversion in confidence). The source of mean reversion in sample paths
for confidence is loss aversion. Moreover, the confidence random function has the Langevin mean reversion representation
$$\frac{\partial}{\partial p} C(p, \omega) = -\beta(\lambda)\big(C(p, \omega) - E[C(p, \omega)]\big) + \epsilon(\omega) \tag{12}$$
where $\epsilon(\omega)$ is an $\mathcal{F}$-measurable random variable, characterized in Theorem 2.3, and $\beta(\lambda)$ is the rate of mean reversion as a function of the loss aversion index $\lambda$.
Remark 2.6. Even though we offer no formal proof, Lemma 3.3 and Proposition 3.4 below
on ergodic properties of confidence implicitly support the push-pull effect described above
for this conjectural lemma.
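Equation (12) is a Langevin/Ornstein-Uhlenbeck form in the probability variable. A simple Euler discretization (parameter values are illustrative assumptions, with `beta` standing in for the loss-aversion-driven reversion rate $\beta(\lambda)$) makes the frictional pull visible:

```python
import random

def ou_confidence_path(c0, mean, beta, sigma, n=500, dp=0.002, seed=42):
    """Euler scheme for dC/dp = -beta * (C - mean) + noise, in the spirit
    of eq. (12); beta is the assumed mean-reversion rate and sigma scales
    the random shock epsilon(omega)."""
    rng = random.Random(seed)
    c, path = c0, [c0]
    for _ in range(n):
        c += -beta * (c - mean) * dp + sigma * (dp ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(c)
    return path

path = ou_confidence_path(c0=0.8, mean=0.0, beta=5.0, sigma=0.05)
```

With `sigma = 0` the path decays monotonically toward the mean; that deterministic component is the pure "friction" attributed to loss aversion.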
For the purpose of empirical exposition in this note, in the sequel we consider the strong but simple case where the kernel $K$ is deterministic, $\mu$ is Lebesgue measure, and $f(p_g) = 1$ and $h(p_\ell) = 1$ in (7) and (9). That is, we isolate the distribution $K(p, B)$ in Definition 2.2.
2.3 Average confidence over Gilboa-Schmeidler convex priors
One immediate consequence of Theorem 2.3 is how to characterize expected confidence over a convex random set of priors. Relying on Jensen's Inequality, see (Feller, 1970, pp. 153-154), we begin with the familiar setup
$$E[C^{x(\omega)}(p)] = E[(Kf)(x(\omega))] = E\left[\int_{z \geq x(\omega)} K(x, z) f(z)\, dz\right] \tag{13}$$
$$K(x(\omega), z) = \int_{x(\omega)}^{z} (w(p) - p)\, dp = [W(z) - W(x(\omega))] - \tfrac{1}{2}[z^2 - x(\omega)^2] \tag{14}$$
Evaluation of $E[K(x(\omega), z)]$ in (14) requires us to compute and account for the following relationships:
$$E[W(x(\omega))] \geq W[E(x(\omega))] \tag{15}$$
by Jensen's Inequality for $W$ convex in loss domain. Conversely, for $W$ concave in gain domain
$$E[W(x(\omega))] \leq W[E(x(\omega))] \tag{16}$$
$$\mathrm{Variance}(x(\omega)) = 0 \iff E[x(\omega)^2] = E[x(\omega)]^2 \tag{17}$$
Equations (15) and (16) plainly show that the source function $W$ retains its general inverted S-shape popularized in the literature. However, the relationship in (17) indicates that equality between the kernel of the average, and the average of the kernel, does not hold in general. In fact, equality holds only if $x(\omega) = \text{constant}$, a result with probability zero in the context of Lebesgue measure and our hypothesis of random convex priors. This implies
Lemma 2.5. Let $K(x(\omega), y)$ be a random confidence kernel. Then
$$E[K(x(\omega), y)] \neq K(E[x(\omega)], y) \tag{18}$$
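Lemma 2.5 is easy to verify numerically: the kernel averaged over random priors differs from the kernel at the average prior. A deterministic sketch with a two-point prior (the Prelec PWF and the support points $\{0.1, 0.3\}$ are illustrative assumptions of ours):

```python
import math

def w(p, alpha=0.65):
    """Illustrative Prelec PWF."""
    return math.exp(-((-math.log(p)) ** alpha)) if p > 0 else 0.0

def K(x, y, n=4000):
    """K(x, y) = integral_x^y [w(p) - p] dp, midpoint-rule quadrature."""
    h = (y - x) / n
    return sum((w(x + (i + 0.5) * h) - (x + (i + 0.5) * h)) * h
               for i in range(n))

# Two-point random prior x(omega) in {0.1, 0.3}, each with probability 1/2.
y = 0.9
avg_of_kernel = 0.5 * K(0.1, y) + 0.5 * K(0.3, y)  # E[K(x(omega), y)]
kernel_of_avg = K(0.2, y)                           # K(E[x(omega), y)]
gap = avg_of_kernel - kernel_of_avg                 # nonzero, per Lemma 2.5
```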
Proposition 2.6 (Average confidence levels across and within source).
Let $C^{x(\omega)}(p) = (Kf)(x(\omega))$ be the sample function of a random field of confidence, over a probability domain indexed by $p$, with source $x(\omega)$. Let $E[x(\omega)]$ be the expected source. If $E[C^{x(\omega)}(p)]$ is average confidence path across confidence levels, and $C^{E[x(\omega)]}(p)$ is average
to generate deterministic confidence paths for the identity function in the confidence kernel domain. Fourth, we construct a trading algorithm motivated by the maxmin program implied by Gilboa and Schmeidler (1989). It provides a confidence based explanation for trading behavior of financial professionals reported in Abdellaoui et al. (2012). Fifth, we characterize the role of confidence in bubbles and crashes in large financial markets. Sixth, we tested our source set theory by estimating confidence betas across and within source sets derived from using CBOE VIX to split Gallup Economic Confidence Data into source sets.
3.1 Construction of confidence field
Let $e_g \in D(K)$ and $\epsilon_\ell \in D(K^*)$ be the vector valued identity functions in the domains of $K$ and $K^*$, respectively. In effect, $e_g$ is an $r \times 1$ basis vector for gain domain with 1 in the $g$-th location and 0 otherwise, $g = 1, \ldots, r$. So that $I_r = [e_1 \ldots e_r]$ is an $r \times r$ identity matrix. Similarly, the derived basis for loss domain is defined by
$$\epsilon_i(e_j) = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases} \qquad i = 1, \ldots, m \tag{20}$$
Thus we generate a dual basis for loss domains. So that $I_m = [\epsilon_1 \ldots \epsilon_m]$ is an $m \times m$ identity matrix. For example, let $\boldsymbol{z} = [z_1 \ldots z_r]^T$, where $T$ stands for transpose, be a column vector that represents the coordinates of a vector valued function in $D(K)$. We write
$$\boldsymbol{z} = z_1 e_1 + \ldots + z_r e_r \tag{21}$$
$$= I_r \boldsymbol{z} \ \text{(column notation)} = \boldsymbol{z}^T I_r \ \text{(row notation)} \tag{22}$$
with respect to the basis in gain domain. By the same token, we expand the operator $K$ in vector notation to get the matrix
$$K = [\boldsymbol{k}_{.1} \ldots \boldsymbol{k}_{.r}] \tag{23}$$
where $\boldsymbol{k}_{.j}$, $j = 1, \ldots, r$ is an $m \times 1$ column vector such that
$$\boldsymbol{k}_{.j}^T = [k_{1j}\ k_{2j}\ \ldots\ k_{mj}] \tag{24}$$
However, the $i$-th row of $K$ is given by
$$\boldsymbol{k}_{i.} = [k_{i1}\ k_{i2}\ \ldots\ k_{ir}], \quad i = 1, \ldots, m \tag{25}$$
which runs through the $r$-dimensional gain domain. Thus, the operation
$$C_\ell = K I_r = [\boldsymbol{k}_{1.}^T\ \boldsymbol{k}_{2.}^T\ \ldots\ \boldsymbol{k}_{m.}^T]^T \tag{26}$$
generates $m$ rows of $1 \times r$ vectors. The $\ell$-th row corresponds to the projection of initial loss probability $p_\ell$ over gain domain. It is a basis field for that initial loss probability, since it was generated by the identity matrix $I_r$ in a manner consistent with the resolution of a vector in (22). A similar analysis shows that $C_g = K^* I_m$ generates $r$ rows of $1 \times m$ vectors. The $g$-th row corresponds to the projection of initial gain probability $p_g$ over loss domain. It is a basis field for initial gain probability. In the context of the notation that follows, the basis matrices are $C_\ell = [C_1^{p_{\ell 1} T} \ldots C_m^{p_{\ell m} T}]^T$ and $C_g = [C_1^{p_{g1} T} \ldots C_r^{p_{gr} T}]^T$. In order to isolate the deterministic confidence basis field effect we did not randomize the matrices. For example, for loss priors that would require a process equivalent to $p_\ell(\omega) = p_\ell + \epsilon(\omega)$ where,
for some variance $\sigma^2$, random draws are taken according to $N(0, \sigma^2)$. Whereupon $C_\ell(\omega)$ and $C_g(\omega)$ would be random matrices containing the configurations of sample functions of random fields of confidence generated by randomized priors. That analysis is outside the scope of this paper. Even so, the deterministic confidence fields capture the gist of the domain transformation(s) as indicated below. By way of illustration, consider the $2 \times 3$ matrix $K$, where $m = 2$ measures for loss and $r = 3$ measures for gain correspond to a 6-point [indexed] probability domain such that
$$K = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}, \qquad K^* = K^T = \begin{bmatrix} a & d \\ b & e \\ c & f \end{bmatrix} \tag{27}$$
Figure 3: Example of basis field orientation
Figure 3 on page 17 depicts the orientation of the confidence basis field. There, we depict the two paths generated by $K$: $\overrightarrow{abc}$ and $\overrightarrow{def}$, as being downward sloping from left to right. These are the basis fields generated over [indexed] gain domain (not shown on horizontal axis) for two prior loss probabilities $p_{\ell 1}$ and $p_{\ell 2}$, say. They represent risk seeking over losses. By contrast, $K^* = K^T$ generates three paths: $\overrightarrow{ad}$, $\overrightarrow{be}$, $\overrightarrow{cf}$. They are upward sloping from left to right. That orientation was obtained by reversing the direction of $\overrightarrow{abc}$ and $\overrightarrow{def}$ to $\overleftarrow{abc}$ and $\overleftarrow{def}$, and then rotating clockwise. These are the basis fields generated for three prior gain probabilities $p_{g1}$, $p_{g2}$, $p_{g3}$, say. They represent preference reversal from risk seeking. Viz., risk aversion over gain domain.
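The $2 \times 3$ example in (27) can be mechanized directly; the sketch below (the 6-point probability grid and Prelec PWF are our assumptions) fills $K$ with kernel values and reads off the basis fields $C_\ell = K I_r$ and $C_g = K^* I_m$:

```python
import math

def w(p, alpha=0.65):
    """Illustrative Prelec PWF (our assumption)."""
    return math.exp(-((-math.log(p)) ** alpha)) if p > 0 else 0.0

def kernel(p_loss, p_gain, n=2000):
    """K(p_loss, p_gain) = integral of [w(p) - p] dp over the interval (eq. 2)."""
    h = (p_gain - p_loss) / n
    return sum((w(p_loss + (i + 0.5) * h) - (p_loss + (i + 0.5) * h)) * h
               for i in range(n))

# Assumed 6-point indexed domain: m = 2 loss priors, r = 3 gain measures.
loss_priors = [0.10, 0.25]
gain_probs = [0.50, 0.70, 0.90]

# K is the 2 x 3 matrix [[a, b, c], [d, e, f]] of example (27);
# its rows are the paths abc and def over gain domain.
K = [[kernel(pl, pg) for pg in gain_probs] for pl in loss_priors]

# Adjoint K* = K^T: its rows are the reversed/rotated paths over loss domain.
K_star = [[K[i][j] for i in range(2)] for j in range(3)]

# Multiplying by the identity leaves the basis fields unchanged:
C_l = K        # C_l = K I_r, one basis-field row per loss prior
C_g = K_star   # C_g = K* I_m, one basis-field row per gain prior
```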
3.2 Confidence preferences
In what follows we introduce a functional representation for confidence induced preferences.
Suppose that $P$ and $Q$ are probability measures that belong to the space $M$ of probability measures such that the confidence operator $K(P, Q)$ is meaningful on $M \times M$. That is, $K$ is defined on subsets $M_P$ and $M_Q$ of $M$. Let $\mu$ be a decomposable measure on $M$, and $V(P)$ be an abstract utility function defined on $P$. Classic VNM utility posits that if $P$ is a probability measure over a suitable space $X$, then
$$U(P) = \int_{x \in X} u(x)\, dP(x) \tag{28}$$
However, in the context of our theory
$$V_P(Q) = (KU)(P) = \int_{M_Q} K(P, Q) U(Q)\, \mu(dQ) \tag{29}$$
$$= \int_{M_Q} K(P, Q) \left[\int_{x \in X} u(x)\, dQ(x)\right] \mu(dQ) \tag{30}$$
The nature of the product measure for $P \times Q \in M \times M$ and Fubini's Theorem, see (Gikhman and Skorokhod, 1969, pg. 97), allow us to write the foregoing as
$$V_P(Q) = \int_{M_Q} \int_{x \in X} K(P, Q(x)) u(x)\, Q(dx)\, \mu(dQ) \tag{31}$$
Thus $V_P(Q)$ is the confidence adjusted VNM utility function. Our theory suggests that if $P$ is in loss probability domain, and $Q$ is in gain probability domain, then our subject is risk seeking over losses in hope of gain. Thus, $\mu = \mu^+$ is the hope of gain measure. In which case, the confidence adjusted VNM utility function $V_P(Q)$ is convex for given $P$. And we should rewrite (31) as:
$$V_P(Q) = \int_{M_Q} \int_{x \in X} K(P, Q(x)) u(x)\, Q(dx)\, \mu^+(dQ) \tag{32}$$
Recall that $K^*$ is the adjoint of $K$, so it induces the measure $\mu^* = \mu^-$. So by the same token, we can write
$$V_Q(P) = \int_{M_P} \int_{x \in X} K^*(P(x), Q) u(x)\, P(dx)\, \mu^-(dP) \tag{33}$$
According to our theory, because $Q$ corresponds to gain probability, our subject is risk averse for given $Q$ in fear of loss. Thus, the induced fear of loss measure is $\mu^-$, and $V_Q(P)$ is concave for given $Q$. It is in effect an affine transformation of VNM. Equations (32) and (33) suggest that $\mu$ has a classic Hahn decomposition consistent with a Radon measure on $M$. See (Gikhman and Skorokhod, 1969, pg. 47) and (Edwards, 1965, pg. 178). In which case we just proved the following

Proposition 3.1 (Confidence induced decomposition of measures). The confidence operations $K$ and $K^*$ induce a decomposable measure $\mu$ on probability domain $M$.
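In discrete form, (31) becomes a double sum: weight each candidate gain measure $Q$ by a confidence weight standing in for $K(P, Q)\mu(dQ)$, then take the $Q$-expected utility. A sketch under assumed lotteries, utility, and weights (all hypothetical):

```python
import math

def u(x):
    """Assumed utility for gains, u(x) = sqrt(x) (an illustrative choice)."""
    return math.sqrt(x)

def vnm(lottery):
    """Classic VNM expected utility of eq. (28) for a discrete lottery."""
    return sum(q * u(x) for x, q in lottery)

def confidence_adjusted_utility(candidates, kappa):
    """Eq. (31), discretized: V_P(Q) = sum over Q of kappa(Q) * U(Q), where
    kappa(Q) stands in for K(P, Q) * mu(dQ) over a finite set of candidate
    gain measures (the weights need not sum to one)."""
    return sum(k * vnm(q) for q, k in zip(candidates, kappa))

# Two hypothetical candidate gain lotteries Q1, Q2: (outcome, probability).
Q1 = [(100.0, 0.5), (0.0, 0.5)]
Q2 = [(64.0, 0.9), (0.0, 0.1)]
kappa = [0.3, 0.6]   # illustrative confidence weights K(P, Q) per candidate

V = confidence_adjusted_utility([Q1, Q2], kappa)
```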
The confidence operations above transform VNM utility into (Tversky and Kahneman, 1992, pg. 303) value functions, and operationalize Yaari (1987) duality theory. In fact, following Tversky and Kahneman, let $f = (x_i, A_i)$, $i = 1, \ldots, n$ be a prospect over disjoint events $A_i$ with probability distribution characterized by $p(A_i) = p_i$. So $(x_i, p_i)$ is a simple lottery. (Tversky and Kahneman, 1992, pg. 301) proposed the following scheme for decision
weights π_i derived from operations on a probability weighting function w decomposed over
gains (w⁺) and losses (w⁻). Rank the x_i in increasing order so that the ranking is dichotomized
by a reference value: positive outcomes are associated with +, negative outcomes with −, and
neutral outcomes with 0. So that for a given value function v over X we have

π⁺_n = w⁺(p_n),  π⁻_{−m} = w⁻(p_{−m})   (34)

π⁺_i = w⁺( Σ_{s=i}^{n} p_s ) − w⁺( Σ_{s=i+1}^{n} p_s ),  0 ≤ i ≤ n − 1   (35)

π⁻_i = w⁻( Σ_{r=−m}^{i} p_r ) − w⁻( Σ_{r=−m}^{i−1} p_r ),  1 − m ≤ i ≤ 0   (36)

V(f⁺) = Σ_{i=0}^{n} π⁺_i v(x_i)   (37)

V(f⁻) = Σ_{i=−m}^{0} π⁻_i v(x_i)   (38)
In the context of our model, let μ⁻ and μ⁺ be the decision weights distributions for negative
and positive outcomes, respectively. We claim that there exists some kernel K(μ⁻, μ⁺) such
that, assuming X is continuous, and using notation analogous to that for VNM utility,

V_{μ⁻}(μ⁺) = V(f⁺) = (Kv)(f⁺) = ∫_{x∈X} K(μ⁻, μ⁺(x)) v(x) μ⁺(dx)   (39)

V_{μ⁺}(μ⁻) = V(f⁻) = (K*v)(f⁻) = ∫_{x∈X} K*(μ⁻(x), μ⁺) v(x) μ⁻(dx)   (40)
We summarize the above in the form of a

Lemma 3.2 (Confident decision operations).
Let f = (X, A) be a prospect with outcome space X and discrete partition A, and P be a
probability distribution over X. Let v : X → ℝ be a real valued value function defined on X.
Let μ⁻ and μ⁺ be the distributions of decision weights over negative and positive outcomes,
respectively, obtained by Tversky and Kahneman (1992) decision weighting operations. There
exists a confident decision operator K defined on μ⁻ ⊗ μ⁺ such that the value functional
over f is given by

V(f) = (Kv)(f)   (41)
Remark 3.1. We note that Chateauneuf and Faro (2009) introduced a functional represen-
tation for confidence based on utility over acts controlled by a confidence function defined
on a level set of priors. Our functional is distinguished because it is predicated on decision
weights, and the confidence operator introduced in this paper.
3.3 Model Simulation

To test the predictions of our theory, a sample of 30 probabilities was generated by sep-
arating the unit interval [0, 1] into 29 evenly spaced subintervals. Including endpoints, this
produces n = 30 observations. Thus, the fixed point probability p* = 0.34 separated the
interval into m = [np*] = 10 loss probabilities and r = 20 gain probabilities. So we were
able to generate 20 confidence index measures for each loss probability pℓ by letting gain
probabilities pg run through gain domain. Similarly, we generated 10 confidence measures
for each gain probability pg by letting loss probabilities pℓ run through loss domain. This
procedure generated a term structure for a confidence field as indicated in Figure 4 for K,
and Figure 5 for K*.
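The grid construction can be sketched as follows. The confidence kernel K is not given in closed form in this section, so `kernel` below is a hypothetical placeholder (a log-odds contrast of gain against loss probability) used only to show how the 10 × 20 field is laid out.

```python
import numpy as np

n, p_star = 30, 0.34
probs = np.linspace(0.0, 1.0, n)       # 29 evenly spaced subintervals, endpoints included
loss = probs[probs < p_star]           # m = 10 loss probabilities
gain = probs[probs >= p_star]          # r = 20 gain probabilities

def kernel(p_loss, p_gain, eps=1e-6):
    # placeholder for K(p_loss, p_gain) -- NOT the paper's kernel
    logit = lambda p: np.log((p + eps) / (1.0 - p + eps))
    return logit(p_gain) - logit(p_loss)

# rows: prior loss probability; columns: gain probability -> one confidence curve per row
field_K = np.array([[kernel(pl, pg) for pg in gain] for pl in loss])
```

Each row of `field_K` corresponds to one "Loss" curve of Figure 4; exchanging the roles of `loss` and `gain` gives the K* field of Figure 5.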
In Figure 4, the highest curve, Loss1, corresponds to the deterministic con-
fidence trajectory C_{pℓ=1}(pg). It represents the evolution or distribution of confidence by and
through the set function C_{pℓ=1}(Bg) when prior loss probability pℓ=1 ∈ Bℓ is as close to zero
as possible. In that case, our subject is overconfident when faced with small probability of
loss, as indicated by the positive value of the index. As the prior probability of loss increases,
confidence wanes over Bg. Thus, from Gain11 (by abuse of notation this corresponds to the
index g = 11 ∈ Tg) onwards our subject becomes underconfident and loss averse. The lower
confidence curves are based on higher initial values for prior loss probabilities. So if our
subject begins the process with less confidence [or fear], then it carries over throughout the
process. For example, at Loss10, i.e. C_{pℓ=10}(pg), our subject begins with little or no confidence,
i.e. prior loss probability is pℓ=10 ∈ Bℓ, and becomes fearful of losing Gain4 at index g = 11.
By contrast, Figure 5 depicts the transformation of loss domain into hope of gain.
There, our subject is risk seeking over losses. In fact, for prior gain probability pg=11 ∈ Bg,
when faced with the possibility of Gain11, i.e. C_{pg=11}(pℓ), onwards over Bℓ, our subject is
overconfident by and through the set function C_{pg=11}(Bℓ) over the entire loss domain indexed
by Tℓ. So the curves in Figure 4 and Figure 5 indicate a momentum factor for confidence
levels predicated on the convex sets of prior probabilities Bℓ and Bg. That is, for transforma-
tion matrix K, higher confidence induced by small prior probability of loss in Bℓ serves as a
confidence builder. This carries over deeper into gain domains before risk aversion kicks in and
subjects become underconfident and fearful. By the same token, for transformation matrix
K* we have overconfidence and hope induced by relatively small probability of gain in Bg.
Therefore, the distribution of prior loss [gain] probabilities is a predictor of the evolution of
confidence paths.
3.4 A confidence based program trading algorithm
In Figure 6 the convex region of feasible trades is enclosed by the dark red pseudo demand
and supply lines and the x-axis. There, in the spirit of Gilboa and Schmeidler (1989) we can
apply the eight minimax, maximin, minimin, and maximax criteria over the curves whose indexes
coincide with those of their respective generating prior(s). For example, for pseudo-demand
ELSE
Confidence incoherent
BEGIN
IF no feasible trade THEN
stop
ELSE
implement arbitrage strategy
BEGIN
Let {A_i} decompose source set S := ∪_{i=1}^{N} A_i
Let ε := feasible arbitrage bound, and C_i ∈ A_i
Let 1_A be an indicator; select C̄ = (1/N) Σ_{i=1}^{N} C_i 1_{A_i}(C_i)
IF |C_i 1_{A_i}(C_i) − C̄| > ε THEN
trade accordingly
END
END
The quantity |C_i 1_{A_i}(C_i) − C̄| > ε reflects the Langevin or Ornstein-Uhlenbeck process type
equation conjectured in Lemma 2.4. Also, in subsubsection 3.6.1 in the sequel, we provide
empirical evidence of the existence of a confidence beta arbitrage strategy that satisfies the
"trade accordingly" instruction above. Of necessity, there are C(4,1) · C(4,1) = 16 possibilities
to consider. Assuming that the 4 confidence coherent trades are feasible multiple
equilibria, the other 12 possibilities suggest the existence of either arbitrage trading or no
trade possibilities. (Hill, 2010, pg. 28) also presents a confidence based model, different from
ours, in which an investor employs a program based on probability judgments. We note
that even though our program implies the existence of incomplete markets, our confidence
coherence approach is distinguished from Artzner et al. (1999), whose interest lies in coherent
measures of risk. See also Fedel et al. (2011).
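A minimal sketch of the arbitrage test in the ELSE branch, assuming each C_i is the observed confidence level on its own source set A_i (so the indicator 1_{A_i}(C_i) = 1) and ε is the feasible arbitrage bound:

```python
import numpy as np

def arbitrage_signals(confidence, eps):
    """Flag source sets whose confidence deviates from the cross-set
    average by more than eps, per the |C_i - C_bar| > eps rule above."""
    c = np.asarray(confidence, dtype=float)
    c_bar = c.mean()                  # C_bar = (1/N) * sum_i C_i
    return np.abs(c - c_bar) > eps    # True => "trade accordingly"

signals = arbitrage_signals([0.80, 0.10, 0.50, 0.45], eps=0.30)
# deviations from the mean 0.4625 are (0.3375, 0.3625, 0.0375, 0.0125)
```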
Our model also has implications for asset pricing because it explains the trajectory and
sensitivity of momentum strategies relative to prior probabilities, i.e. starting dates. For
instance, Moskowitz et al. (2012) conducted a study in which they describe a purported
time series momentum asset pricing anomaly as follows:
[W]e find that the correlations of time series momentum strategies across asset classes are larger than the correlation of the asset classes themselves. This suggests a stronger common component to time series momentum across different assets than is present among the assets themselves. Such a correlation structure is not addressed by existing behavioral models.
Undeniably, Proposition 2.6 provides a "behavioral model" explanation for the seeming
asset pricing anomaly. The across asset class correlation is our E[C_{x(ω)}(p)]. It is tanta-
mount to averaging across asset classes. By comparison, the within asset class correlation
is our C_{E[x(ω)]}(p). In effect, the Moskowitz et al. (2012) trading strategy is sensitive to source
sets. In the context of our model, their set of asset classes is a source set in which each
class is accompanied by a different prior. Consequently, the within and across averages
differ. In fact, Maymin et al. (2011) conducted a study, and Monte Carlo experiments,
which plainly show that the Moskowitz et al. (2012) momentum strategy is sensitive to start
date, i.e. priors.
3.5 Market Sentiment: hope, fear, bubbles, crashes, and chaos
If we think of our subjects as players in financial markets, then one admissible interpretation
of Figure 6 is that when confidence levels are high, investors are not very risk averse. So they
are on higher downward sloping [pseudo demand] confidence paths, i.e. demand for credit
is high5. Suppliers of credit for this veritable irrational exuberance are unable to satisfy
demand on their current [pseudo supply] confidence path. So there is a structural shift to
5The analysis that follows is distinguished from Rigotti et al. (2008).
the left onto a higher [pseudo supply] confidence curve, i.e. interest rates go up in response to
increased demand for credit. It should be noted that the lowest level demand side confidence
curves do not intersect with any upward sloping curves. That scenario represents credit
rationing: investors whose confidence level is so low, and risk aversion is so high, that they
opt out of the credit market because their demands are not met. Cf. Stiglitz and Weiss
(1981). When there is a decrease in confidence, the confidence curves shift to the right. The
foregoing scenario is reflected in Figure 7 which depicts Gallup Monthly Confidence Index
data obtained from surveys for the period 2000-2007. That index is computed from the
formula6:

GDECIndex = (1/2) [ %Survey economic condition rated (Excellent + Good − Poor)
                  + %Survey economic condition rated (Getting Better − Getting Worse) ]   (50)
The index has a theoretical range [−100, 100]. There, positive numbers represent over-
confidence and negative numbers underconfidence. The slope of the confidence trajectory
over varying time ranges depicts a term structure for confidence. In our model, the time index
is replaced by indexed loss/gain probabilities. So we have a term structure for our confidence
field. See Goldstein (2000). Perhaps most important, Gallup's index in (50) contains the
component parts of a Tversky and Wakker (1995) impact event which turns possibility into
certainty; impossibility into possibility; versus a possibility more or less likely. Specifically,
"Excellent + Good" implies Certainty; "Poor" implies Impossibility; "Getting Better"
and "Getting Worse" are possibility more or less likely events. In effect, the GDECI is
6See http://www.gallup.com/poll/123323/Understanding-Gallup-Economic-Measures.aspx. Last visited 4/29/2012.
based on a subjective probability measure that exhibits bounded subadditivity.
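Formula (50) is easy to operationalize; the argument names below for the survey percentages are illustrative:

```python
def gdeci(excellent, good, poor, getting_better, getting_worse):
    """Gallup Daily Economic Confidence Index per (50).
    Inputs are survey response percentages (0-100); output lies in [-100, 100]."""
    current_conditions = excellent + good - poor
    outlook = getting_better - getting_worse
    return 0.5 * (current_conditions + outlook)

# a pessimistic reading: 2% excellent, 8% good, 60% poor,
# 15% getting better, 75% getting worse
score = gdeci(2, 8, 60, 15, 75)   # (2 + 8 - 60 + 15 - 75) / 2 = -55.0
```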
In practice, the scenario above takes place in a stochastic environment depicted by Fig-
ure 7. To see this analytically, let M be the space of all probability measures; Ω be a sample
space; S and F the σ-fields of Borel measurable subsets of M and Ω, respectively; and ρ be a
Levy-Prokhorov metric. See (Dudley, 2002, pg. 394). So that (M × M, S ⊗ S, ρ) is a
Levy-Prokhorov measure space. The Levy-Prokhorov metric is a measure of the distance
between two probability measures7. So it captures impact events of the type in (50). On
some ambient space, assume that prior loss probabilities are randomized on some convex set
Bℓ ∈ S, from loss to gain domains (and vice versa for gain to loss domain). This assumption
is consistent with a subject's response to ambiguity aversion in the sense of Ellsberg (1961),
and the proposed maximin program in Gilboa and Schmeidler (1989). Thus, we modify
Definition 2.3 to account for a generalized sample function from the Markov random field
of confidence, see Kinderman and Snell (1980), generated by prior beliefs. To that end, we
have pg ∈ Bg, and in particular pℓ(ω) ≡ (pℓ, ω) ∈ Bℓ × Ω. By abuse of notation, the sample
function C_{pℓ}(pg, ω) ≡ C_{pℓ(ω)}(pg) over gain probability domain is, in its most general form

C_{pℓ}(pg, ω) = ∫_{pg∈B_{pℓ(ω)}} K(pℓ(ω), pg ∈ Bg) μ(dpg)

             = ∫_{pg∈B_{pℓ(ω)}} w(pℓ) μ(dpg) ∘ ρ{(pℓ(ω), pg) | (pℓ(ω), pg) ∈ Bℓ × Bg},  Bℓ × Bg ∈ S²   (51)

where μ is a measure on gain [loss] probability domain, and ρ{(pℓ(ω), pg) | Bℓ × Bg} is some
function of the Levy-Prokhorov metric for loss-gain measures on Bℓ × Bg. See e.g. the Strassen-
Dudley Representation Theorem introduced by Strassen (1965) and Dudley (1968). A similar
relation holds for sample functions C_{pg}(pℓ, ω) of random fields from gain to loss domains.
7A recent paper by Hill (2010) introduced a metric on probability spaces to characterize probability judgments. We leave the implementation of that for another day.
The measurable Tversky and Wakker (1995) impact events in loss and gain domains are:

Aℓ = {ω | ω ∈ pℓ⁻¹(Bℓ)} ∈ F   (52)

Ag = {ω | ω ∈ pg⁻¹(Bg)} ∈ F   (53)

Consider an "animal spirit" ω* in a large market of size N = m + r with aggregate
pseudo demand D_{N,ℓ}(K(ω*)) and aggregate pseudo supply S_{N,g}(K(ω*)) to be defined below.
According to our model, the elementary Tversky-Wakker event ω* ∈ Aℓ ∩ Ag is perceived
differently in loss and gain domains. If pℓ,j(ω*) is the random prior loss probability for the j-th
subject, and pg,k(ω*) the random prior gain probability for the k-th [adjoint] subject, then we
have

D_{N,ℓ}(K(ω*)) = Σ_{j=1}^{m} C_{pℓ,j}(pg,j, ω*) ∼ O_p^D(a(m)),  ω* ∈ Aℓ   (54)

S_{N,g}(K(ω*)) = Σ_{k=1}^{r} C_{pg,k}(pℓ,k, ω*) ∼ O_p^S(b(r)),  ω* ∈ Ag   (55)

where O_p^D(a(m)), O_p^S(b(r)) are probabilistic growth rates for [slowly varying] functions a(m)
and b(r); pℓ,i(ω*) is the i-th personal probability. If D_{N,ℓ}(K(ω*)) = S_{N,g}(K(ω*)) in equilibrium, but
O_p^D(a(m)) ≫ O_p^S(b(r)), then aggregate demand is growing much faster than supply. So we
have an eventual bubble. We summarize this in the following
Lemma 3.3 (Ergodic confidence).
Let T = K*K, f ∈ D(T) and D(K) ∩ D(K*) ⊆ D(T). Define the reduced space D(T) =
{f | f ∈ D(K) ∩ D(K*) ⊆ D(T)}. And let B be a Banach space, i.e. normed linear space, that
contains D(T). Let (D(T), 𝒯, Q) be a probability space, such that Q and 𝒯 are a probability
measure and σ-field of Borel measurable subsets on D(T), respectively. We claim that Q
is measure preserving, and that the orbit or trajectory of T induces an ergodic component of
confidence.
Proof. See Appendix subsection A.2.
Remark 3.2. We note that by construction T = K*K. So that sgn(T) ∼ (−); sgn(T²) ∼
(+); and sgn(T³) ∼ (−) satisfy the 3-period prerequisite for chaos. See (Devaney, 1989,
pp. 60, 62). Typically, ergodic theorems imply the equivalence of space and time averages.
However, consistent with Assumption 2.1, our focus is on space averages in this paper.
Proposition 3.4 (Large deviation: Almost sure bubbles and crashes). Assume that confi-
dence levels are ergodic. So that according to the Birkhoff-Khinchin ergodic theorem there exists a
limiting confidence trajectory C̄. Let D_{N,ℓ}(K(ω*)) and S_{N,g}(K(ω*)) be aggregate demand and
supply for an elementary Tversky-Wakker impact event; and a(N) = max{a(m), b(r)}, N =
m + r. Then the probability that there will be bubbles and crashes induced by confidence levels
in excess of that consistent with market equilibrium, for some ε > 0, is given by

P( lim sup_{N→∞} a(N)⁻¹ |D_{N,ℓ}(K(ω*)) − S_{N,g}(K(ω*))| > ε ) = E[C̄²]/ε²  a.s.   (56)

P( lim sup_{N→∞} a(N)⁻¹ (D_{N,ℓ}(K(ω*)) − S_{N,g}(K(ω*))) > ε ) = exp( E[C̄²]/ε² − 1 )   (57)
Proof. See subsection A.3.
Remark 3.3. Since ε is arbitrary, choose

ε = ‖C̄‖_{L²} = |E[C̄²]|^{1/2} = ( ∫_M C̄² dμ )^{1/2}

to get an almost sure result. The assumption of ergodic confidence is unnecessary in this
case because the choice of ε implies ergodicity inasmuch as the RHS of (56) is now 1.
Rearrangement of (56) for a(N) large implies that we can write

P( a(N)⁻¹ (D_{N,ℓ}(K(ω*)) − S_{N,g}(K(ω*))) > ε ) = exp( E[C̄²]/ε² − 1 )   (58)

This means that a large deviation (a tail event) between confidence induced pseudo-demand
and pseudo-supply occurs with uniform probability exp( E[C̄²]/ε² − 1 ) that depends on the
ergodicity of confidence.
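The bound in (56) is Chebyshev-type and can be checked numerically. The AR(1) path below is only an illustrative stand-in for an ergodic confidence trajectory (Lemma 2.4 conjectures an Ornstein-Uhlenbeck form); the bound holds for the sample distribution by Markov's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.9, 0.1, 100_000

# discrete-time Ornstein-Uhlenbeck (AR(1)) stand-in for an ergodic confidence path
c = np.zeros(T)
shocks = rng.normal(0.0, sigma, T)
for t in range(1, T):
    c[t] = phi * c[t - 1] + shocks[t]

second_moment = np.mean(c**2)               # sample analogue of E[C_bar^2]
for eps in (0.3, 0.5, 1.0):
    tail_freq = np.mean(np.abs(c) > eps)    # empirical P(|C| > eps)
    assert tail_freq <= second_moment / eps**2   # Chebyshev bound, cf. (56)
```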
Given sufficient time, another elementary Tversky-Wakker event or animal
spirit ω⁰ causes confidence to wane. So the [asymmetric] growth rates are reversed,
demand growth is much lower than supply, and the market crashes to where O_p^D(a(m)) ≪
O_p^S(b(r)). The fluctuations of positive and negative growth rates, attributable to ergodic
confidence levels, suggest that our relatively simple model is able to capture stylized facts
about bubbles and crashes. Thus, confidence growth should be a policy control variable.
In fact, consistent with our theory, the steep downward slope of the confidence trajectory
which began in early 2007 in Figure 7 predicted the Great Recession which began around mid-
2007. Most important, the ergodic element of the probability relationship in (56) guarantees
that the market will be in disequilibrium at any given time unless E[C̄²] = 0 or ε² → ∞.
The former requires that the second moment for confidence be zero, in which case
long run dispersion in confidence levels must be zero. A highly unlikely event. The latter
condition implies that the deviations between market demand and supply be infinite, another
impossibility. So our erstwhile large deviation proposition offers probabilistic proof that there
is no stable market equilibrium. Thereby providing a confidence based proof of Foley (1994)
entropy motivated statistical market equilibrium. We summarize the foregoing with the
following
Proposition 3.5 (Chaotic behaviour).
The dynamical system induced by confidence is chaotic because:
(1) It is sensitive to initial conditions;
(2) It is topologically mixing;
(3) Its periodic orbits are dense.
Proof. By hypothesis, ambiguity implies (1). In the proof of Lemma 3.3, it is shown that the
orbit generated by the operator T constructed from K and K* is 3-period (−)(+)(−). That
implies (3) by and through Sarkovskii's Theorem. See (Devaney, 1989, pp. 60, 62). Moreover,
according to (Vellekoop and Borglund, 1994, pg. 353) and (Banks et al., 1992, pg. 332), (1)
and (3) imply (2). But (2) is weakly equivalent to our topologically transitive result in
Proposition 3.4. See (Devaney, 1989, pg. 42). So by definition, the dynamical system
induced by the random initial conditions attributed to ambiguity is weakly chaotic.
Proposition 3.5 implies that dynamical systems induced by confidence are weakly unpre-
dictable. By contrast, one collateral benefit of Proposition 3.4 is that it allows us to obtain
a crude tail index and value-at-risk (VaR) estimate for market confidence in the following

Example. Let α be the tail index in Proposition 3.4. Then we obtain the estimate

θ^α = exp( E[C̄²]/ε² − 1 )  ⇒  α = ( E[C̄²]/ε² − 1 ) / log(θ)

By plugging in the bounds 0 < α ≤ 2 for the tail index we obtain the VaR as the solution
to

E[C̄²]/ε² − 1 < 2 log(θ)
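One reading of the Example treats θ as a given tail probability, so that α = (E[C̄²]/ε² − 1)/log(θ); the helper below (name ours) simply evaluates that expression:

```python
import math

def tail_index(second_moment, eps, theta):
    """Crude tail index alpha = (E[C^2]/eps^2 - 1) / log(theta);
    theta is a given tail probability in (0, 1)."""
    return (second_moment / eps**2 - 1.0) / math.log(theta)

# e.g. E[C^2] = 0.05, eps = 0.5, theta = 0.6 -> alpha inside the (0, 2] range
alpha = tail_index(0.05, 0.5, 0.6)
```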
3.6 Confidence source sets and VIX induced confidence beta

In this subsection we conduct a weak test of the source set hypothesis by using pseudo techni-
cal analysis of two popular time series for confidence: the Gallup Daily Economic Confidence
Index (GDECI) and the Chicago Board of Exchange (CBOE) VIX daily series. There are
solid theoretical and empirical reasons for the selection of these series evidenced by Fox et al.
(1996) (option traders exhibit bounded subadditivity) and Lemmon and Portniaguina (2006)
(consumer confidence forecasts small stocks but not variations in time series momentum). The
GDECI is computed from survey response as indicated by the formula in (50), which sounds like a
Tversky and Wakker (1995) impact event. VIX is computed from implied volatility for
a sample of option prices, and it measures the market's expectation of near term volatility
or uncertainty. See e.g. Chicago Board of Exchange (2009) for details on the formula. Gallup
did not report daily confidence index measures before 20088. So our comparison with the CBOE
VIX daily series is limited to the post 2008 period between 2008:1:12 and 2012:03:28.
Figure 8 depicts the VIX daily close with basis field orientation for confidence
paths. Undeniably, those paths mimic the predictions of our theory. We conducted an eyeball
test by cross plotting GDECI vs. VIX and roughly identified two confidence regimes in the
data. See Figure 9. We attribute those to Source Sets A and B which we extrapolated and
plotted in Figure 10 and Figure 11. Source Set A represents the period between 2008:03:13
and 2009:09:17, which lies in the core of the global financial crisis, and Great Recession, in capital
markets. A period of great uncertainty or ambiguity in the context of our model. Source
Set B represents the period 2009:09:18 to 2012:03:28. Evidently, agents in the economy were
relatively less uncertain even though there was still ambiguity. We stated and tested the
null hypothesis implied by Proposition 2.6 as follows:
8Private communication from Zach Bikus.
Figure 8: CBOE VIX Daily Market Uncertainty
Figure 9: Gallup Daily Economic Confidence Index vs. VIX: Source Set (A ∪ B)
Figure 10: Gallup Daily Economic Confidence Index vs. VIX: Source Set A
Figure 11: Gallup Daily Economic Confidence Index vs. VIX: Source Set B
HYPOTHESIS

H0 : E[C_{x(ω)}(y) | x(ω) ∈ Source Set (A ∪ B)]
   = C_{E[x(ω)]}(y),  x(ω) ∈ {Source Set (A) ∪ Source Set (B)}   (59)

Ha : H0 not true   (60)
Here, x(ω) is a random source (in our case an unobserved prior probability induced by
ambiguity) in the sets indicated for projection of confidence over y. We ran a simple linear
regression of GDECI (denoted as CONF) on VIX as a proxy for the average confidence
path across Source Set (A ∪ B). In effect VIX is an instrumental variable for y. A similar
regression was run within Source Set A and Source Set B9. To wit, CONF(source)(VIX) is
our instrument or measure for C_{x(ω)}(y). The results are reported as follows (standard
errors in parentheses):

CONF(A∪B) = 23.7920(0.9203) − 0.4946(0.0324) VIX(A∪B),  R² = 0.1903, SSE = 142505.2, n(A∪B) = 1069   (61)

CONF(A) = 44.1011(1.3104) − 0.2225(0.0386) VIX(A),  R² = 0.0899, SSE = 43588.95, n(A) = 383   (62)

CONF(B) = 22.5088(0.3020) − 0.2594(0.7212) VIX(B),  R² = 0.0974, SSE = 17307.76, n(B) = 686   (63)
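Regressions of the form (61)-(63) are ordinary least squares fits of CONF on VIX. A sketch with synthetic data (the actual GDECI and VIX series are not reproduced here):

```python
import numpy as np

def ols(y, x):
    """Fit y = a + b*x by least squares; return intercept, slope, R^2 and SSE."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sse = float(resid @ resid)
    sst = float(((y - y.mean()) ** 2).sum())
    return beta[0], beta[1], 1.0 - sse / sst, sse

rng = np.random.default_rng(1)
vix = rng.uniform(15.0, 60.0, 500)                    # synthetic VIX levels
conf = 24.0 - 0.5 * vix + rng.normal(0.0, 8.0, 500)   # synthetic confidence series
a, b, r2, sse = ols(conf, vix)                        # slope b recovers roughly -0.5
```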
The equations show that even though confidence beta and R² for Source Set A and Source
Set B appear similar10, Source Set A appears dissimilar to Source Set (A ∪ B). For example,
9These erstwhile source functions are different from that in Abdellaoui et al. (2011). The latter is based on representation of probability weighting functions.
10There is a subset of Source Set B, viz. the period 2008:1:2 to 2008:03:12, which generated the confidence beta pricing equation

CONF(subset B) = 16.278(7.3474) − 0.7011(0.2835) VIX(subset B),  R² = 0.1152, n = 49, F = 6.1173, SSE = 849.1184
approximately 9% of the variability in confidence levels is explained by VIX within each
of source sets A and B, according to (62) and (63). By contrast, 20% of the variability in
confidence is explained by VIX across source sets (A ∪ B) in (61). Source Set A represents
a period of comparative uncertainty or ambiguity evidenced by its relatively large intercept
term α_A = 44.1011 and higher dispersion SSE. Thus, reflecting a comparatively high prior
probability of loss x_A = p_A ≥ x_B = p_B and diffusion of confidence, consistent with response
to the financial crisis in 2008 reflected in Source Set A data for 2008:03:13 to 2009:09:17, and
the market crash predictions of Proposition 3.4. In order to test H0 in (59) we employ the
Chow test, see Chow (1960), explained in (Kmenta, 1986, pg. 421), whose statistic is given
by

[ (SSE(A∪B) − SSE(A) − SSE(B)) / K ] / [ (SSE(A) + SSE(B)) / (n(A) + n(B) − 2K) ] ∼ F(K, n(A)+n(B)−2K)   (64)

There, K = 2 is the number of parameters in each equation. The computed statistic is
F = 434.8265 with (2, 1065) degrees of freedom. Thus p ≪ 0.01 and we reject H0 on the
grounds that the sources or priors in Source Set A and Source Set B are not drawn from
the same distribution.
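The mechanics of (64), applied to the reported sums of squared errors for illustration (small discrepancies with the published F value can arise from rounding of the published inputs):

```python
def chow_stat(sse_pooled, sse_a, sse_b, n_a, n_b, k=2):
    """Chow (1960) statistic per (64): structural-break test across two samples."""
    numerator = (sse_pooled - sse_a - sse_b) / k
    denominator = (sse_a + sse_b) / (n_a + n_b - 2 * k)
    return numerator / denominator

f_stat = chow_stat(142505.2, 43588.95, 17307.76, 383, 686)
# f_stat vastly exceeds the 1% critical value of F(2, 1065), about 4.63,
# so H0 (a common source distribution) is rejected
```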
3.6.1 Confidence beta arbitrage
Another useful exercise is comparison of relative confidence betas:

β_A / β_B = 0.86,  β_A / β_(A∪B) = 0.45,  β_B / β_(A∪B) = 0.52   (65)
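The ratios in (65) follow directly from the absolute slope estimates in (61)-(63):

```python
beta_a, beta_b, beta_ab = 0.2225, 0.2594, 0.4946   # |confidence betas| from (62), (63), (61)

rel_a_over_b  = beta_a / beta_b     # Source Set A relative to Source Set B
rel_a_over_ab = beta_a / beta_ab    # Source Set A relative to Source Set (A u B)
rel_b_over_ab = beta_b / beta_ab    # Source Set B relative to Source Set (A u B)
```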
Roughly, the relative confidence beta (0.86) for Source Set A and Source Set B is larger
than the relative beta for each source set across Source Set A ∪ B. The relative betas are
an implicit comparison of the growth in confidence. Thus, subjects in Source Set A started
with a much higher prior loss probability implied by CONF(A)(VIX) in (62). Consequently,
That may explain the discrepancy between (63) and (62).
they were comparatively less risk seeking than subjects in Source Set B, which supports a
steeper slope β_B. Recall that the vertical intercept in the basis field example illustrated in
Figure 3, as well as Figure 4, corresponds to the initial value for prior loss
probability pℓ. By the same token, slope comparison shows there is more risk seeking in the
bear market reflected by β_(A∪B) supported by Source Set A ∪ B, compared to β_A, β_B in Source
Sets A and B. Compare Chen (2011). This is so even though subjects in Source Set B started with
a prior loss probability close to the market, as evidenced by the intercept terms in (63) and
(61). This suggests a confidence beta arbitrage strategy in which investors characterized
by Source Set A could buy put options on investors characterized by Source Set B by virtue
of relative confidence beta analytics. Whether these relative confidence betas could explain
so called beta arbitrage in asset pricing theory is left to be seen. See e.g. Frazzini and
Pedersen (2010).
4 Conclusion
We introduced a confidence kernel operator which establishes a nexus between the multiple
prior and source function paradigms in decision theory. Further, the operator generates
a field of confidence paths that mimic popular confidence indexes. So our model extends
the solution space for confidence to integral equations and operator theory. Preliminary re-
search in progress suggests that heteroskedasticity correction models for volatility clustering
in econometrics mimic inverse confidence operations. So the confidence kernel operator may
provide a new mechanism for heteroskedasticity correction by virtue of its data transform-
ing mechanism. Cf. (Kmenta, 1986, pg. 280). Thus, we are able to answer questions like:
what preference functions in the domain of confidence kernels generate an observed confi-
dence path? Additional research questions include but are not limited to whether butterfly
effects in sample paths for confidence, arising from small perturbation of prior probabilities,
can explain chaotic behavior, thus making human behavior unpredictable.
A Appendix of Proofs
A.1 Proof of Theorem 2.3: Confidence Representation

This proof extends (Gikhman and Skorokhod, 1969, Thm. 2, pg. 189) to account for the
probabilistic nature of the random domain I(ω) ⊆ M. Let λ_n and φ_n(p) be the n-th eigen-
value and corresponding eigenfunction of Γ(p₁, p₂). By definition Γ(·) is positive definite.
Without loss of generality let μ(dp) be Lebesgue measure on M. According to Mercer's
Theorem, see (Reisz and Sz.-Nagy, 1956, pg. 245) and (Loeve, 1978, pg. 144), we can write

Γ(p₁, p₂) = Σ_{n=1}^{∞} λ_n φ_n(p₁) φ_n(p₂),  λ_n > 0 ∀n   (66)

λ_n φ_n(p₁) = E[ ∫_{I(ω)} Γ(p₁, y) φ_n(y) dy | F ],  E[ ∫_{I(ω)} φ_m(p) φ_n(p) dp | F ] = δ_{m,n}   (67)

where I(ω) is the random domain of definition for φ_n, and δ_{m,n} is Kronecker's delta. Define

ζ_n(ω) = ∫_{I(ω)} C(p, ω) φ_n(p) dp   (68)

E[ζ_n(ω) ζ_m(ω)] = E[ ζ_n(ω) ζ_m(ω) | F ]   (69)

= E[ ∫_{I(ω)} ∫_{I(ω)} Γ(p₁, p₂, ω) φ_n(p₁, ω) φ_m(p₂, ω) dp₁ dp₂ | F ] = λ_n δ_{m,n}   (70)

E[C(p, ω) ζ_n(ω)] = ∫_{I(ω)} Γ(p, y, ω) φ_n(y) dy = λ_n(ω) φ_n(p, ω)  a.s. P   (71)
The latter representation introduces a probabilistic component to the proof which necessi-
tated the ensuing modification. Consider the following probabilistic expansion based on the
component parts above

E[ |C(p, ω) − Σ_{n=1}^{N} ζ_n(ω) φ_n(p, ω)|² | F ]   (72)

= E[ Γ(p, p) − 2 Σ_{n=1}^{N} C(p, ω) ζ_n(ω) φ_n(p, ω) + Σ_{n=1}^{N} λ_n(ω) |φ_n(p, ω)|² ]   (73)

= E[ Γ(p, p) − Σ_{n=1}^{N} λ_n(ω) |φ_n(p, ω)|² ]   (74)

Let 1_{A_N} be the characteristic function of an event A_N, and for δ_N(ω) > 0 sufficiently large
define the random set

A_N = { Γ(p, p) − Σ_{n=1}^{N} λ_n(ω) |φ_n(p, ω)|² ≥ δ_N(ω) },  E[1_{A_N}] = P(A_N)   (75)

According to Parseval's Identity in L², see e.g. (Yosida, 1960, pg. 91), we have

Γ(p, p) = lim_{N→∞} E[ Σ_{n=1}^{N} λ_n(ω) |φ_n(p, ω)|² ]   (76)

Let δ_N(ω) = Σ_{j=N+1}^{∞} λ_j(ω) |φ_j(p, ω)|², so that δ_N(ω) → 0   (77)

We note that in the Karhunen-Loeve representation one typically assigns E[ζ_n(ω)] = 0. So
that {ζ_n(ω)}_{n=1}^{∞} is an iid mean zero sequence with finite second moments. That facilitates
application of our results but may not be necessary here. See (Ash, 1965, pp. 277-279).
By construction C(p, ω) ∈ C[0, 1] and is bounded and continuous in p. By virtue of
the classic sup-norm ‖f‖ = sup_x |f(x)|, f ∈ C[0, 1], and the induced metric ρ_C(f, g) =
sup_x |f(x) − g(x)| on C[0, 1], according to Theorem 1 in (Gikhman and Skorokhod, 1969,
pp. 449-450), we have

lim_{N→∞} sup_{|p−p′|→0} P( |Γ(p, p′) − Σ_{n=1}^{N} λ_n(ω) |φ_n(p, ω)|²| > δ_N(ω) ) = 0   (78)

Thus we get a weaker result: Γ(p, p) ∈ L(M, S, μ). By absolute continuity of Γ(p, p) on L, we
apply Kolmogorov's Inequality in L² ⊆ L, and the probabilistic continuity criterion above,
to get

lim_{N→∞} P(A_N) = lim_{N→∞} E[ (Γ(p, p) − Σ_{n=1}^{N} λ_n(ω) |φ_n(p, ω)|²)² ] / δ_N(ω)² = 0   (79)

So that lim_{N→∞} Σ_{n=1}^{N} E[ λ_n(ω) |φ_n(p, ω)|² | F ] = Γ(p, p)  a.s. P   (80)

By definition of P(A_N) in (75) and the result in (80), all of which is based on the incipient
expansion in (72), we retrieve the desired result

C(p, ω) = lim_{N→∞} Σ_{n=1}^{N} ζ_n(ω) φ_n(p, ω)  a.s. P   (81)
A.2 Proof of Lemma 3.3: Ergodic Confidence

Before we begin the proof, we state and prove the following lemma, and restate the lemma
to be proved, Lemma 3.3, for convenience.

Lemma A.1 (Graph of confidence).
Let D(K), D(K*) be the domains of K and K* respectively. Furthermore, construct the
operator T = K*K. We claim (i) that T is a bounded linear operator, and (ii) that for
f ∈ D(K) the graph (f, Tf) is closed.
Proof.
(i). That T is a bounded operator follows from the fact that the fixed point p* induces
singularity in K and K*. Let {T_n}_{n=1}^{∞} be a sequence of operators induced by an
appropriate corresponding sequence of K_n's and K*_n's, and σ(T_n) be the spectrum of
T_n. Thus, we write T_n = Σ_{j=1}^{N_n} λ_j, λ_j ∈ σ(T_n), where N_n = dim σ(T_n). Singularity
implies lim_{N_n→∞} λ_{N_n} = 0 and for λ ∈ σ(T), we have lim_{n→∞} ‖T_n − T‖ ≤ lim_{n→∞} |λ_n − λ| ‖f‖ = 0. Thus, T_n − T is bounded.
(ii). Let f ∈ D(K) and C(x) = (Kf)(x). So (Tf)(x) = (K*Kf)(x) = (K*C)(x) =: C*(x)
for f ∈ D(T). For that operation to be meaningful we must have f ∈ D(K). But
T* = (K*K)* = K*K = T, so Tf ∈ D(T). According to the Open Mapping Closed
Graph Theorem, see (Yosida, 1980, pg. 73), the boundedness of T guarantees that the
graph (f, Tf) ⊆ D(T) × D(T) is closed.
Restated lemma:

Lemma A.2 (Ergodic confidence).
Let T = K*K, f ∈ D(T) and D(K) ∩ D(K*) ⊆ D(T). Define the reduced space D(T) =
{f | f ∈ D(K) ∩ D(K*) ⊆ D(T)}. And let B be a Banach space, i.e. normed linear space,
that contains D(T). Let (B, 𝒯, Q) be a probability space, such that Q and 𝒯 are a probability
measure and σ-field of Borel measurable subsets on B, respectively. We claim that Q is
measure preserving, and that the orbit or trajectory of T induces an ergodic component of
confidence.
Proof. Let $f \in D(T)$. Then $(Tf)(x) = (K^{\ast}Kf)(x) = (K^{\ast}C)(x)$ for $f \in D(T)$. But $T^{\ast} = (K^{\ast}K)^{\ast} = K^{\ast}K^{\ast\ast} = K^{\ast}K = T \Rightarrow f \in D(T^{\ast})$. Since $f$ is arbitrary, then by our reduced space hypothesis, $T$ maps arbitrary points $f$ in its domain back into that domain, so that $T : D(T) \to D(T)$. Whereupon, from our probability space on Banach space hypothesis, for some measurable set $A \in \mathcal{T}$ we have the set function $T(A) = A \Rightarrow T^{-1}(A) = A$ and $Q(T^{-1}(A)) = Q(A)$. In which case $T$ is measure preserving. Now by Lemma A.1, $(TC)(x) = T(Tf)(x) = (T^{2}f)(x) \Rightarrow (f, T^{2}f)$ is a closed graph on $D(T) \times D(T)$. By induction, $(f, T^{j}f)$, $j = 1, 2, \ldots$, is also a closed graph. In which case the evolution of the graph $(f, T^{j}f)$, $j = 1, 2, \ldots$, is a dynamical system, see (Devaney, 1989, pg. 2), that traces the trajectory or orbit of $f$. Now we construct a sum of $N$ graphs and take their average to get

$\bar{f}_N(x) = \frac{1}{N}\sum_{j=1}^{N}(T^{j}f)(x)$  (82)

According to the Birkhoff-Khinchin Ergodic Theorem, see (Gikhman and Skorokhod, 1969, pg. 127), since $T$ preserves the measure $Q$, we have

$\lim_{N\to\infty}\bar{f}_N(x) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}(T^{j}f)(x) = f^{\ast}(x)$  a.s. $Q$  (83)

Furthermore, $f^{\ast}$ is $T$-invariant and $Q$-integrable, i.e.

$(Tf^{\ast})(x) = f^{\ast}(x)$  (84)

$E[f^{\ast}(x)] = \int f^{\ast}(x)\,dQ(x) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\int (T^{j}f)(x)\,dQ(x)$  (85)

$\qquad = \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}E[(T^{j}f)(x)]$  (86)
Moreover,

$E[f^{\ast}(x)] = E[f(x)] \Rightarrow T\,E[f^{\ast}(x)] = T\,E[f(x)] = E[(Tf)(x)] = E[C(x)]$  (87)

So the time average in (86) is equal to the space average in (87). Whence $f \in D(T)$ induces an ergodic component of confidence $C(x)$.
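The convergence in (82)-(83) can be illustrated numerically. The sketch below is our illustration, not the paper's operator $T$: it uses the classic measure-preserving, ergodic map $T(x) = x + \alpha \pmod 1$ with irrational $\alpha$ on $[0, 1)$ under the uniform measure, so the time average of $f(x) = \cos(2\pi x)$ should approach the space average $E[f] = 0$:

```python
import numpy as np

alpha = np.sqrt(2.0) - 1.0   # irrational rotation angle (ergodic map)
x0 = 0.3                     # arbitrary starting point
N = 200_000

# Orbit x_j = T^j x0 of the measure-preserving map T(x) = x + alpha mod 1
orbit = (x0 + alpha * np.arange(1, N + 1)) % 1.0
f_vals = np.cos(2.0 * np.pi * orbit)

time_avg = f_vals.mean()     # (1/N) sum_j f(T^j x0), as in (82)
space_avg = 0.0              # E[cos(2*pi*X)] under the uniform measure

# Birkhoff-Khinchin: time average converges to the space average
assert abs(time_avg - space_avg) < 1e-2
```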
A.3 Proof of Proposition 3.4: Large Market Bubble/Crash
For notational convenience let

$a(N) = \max\{a(m),\, b(r)\}, \quad N = m + r$  (88)

$C_i^{\varepsilon}(\omega) = C_{p_i}^{\varepsilon}(p_i^{g}, \omega)$  (89)

$C_i^{g}(\omega) = C_{p_i^{g}}^{g}(p_i, \omega)$  (90)

Now expand the summands to account for any surplus demand or supply, and for the fact that $C$ can take positive or negative values.
$a(N)^{-1}\big[D_{N,\varepsilon}(K(\omega)) - S_{N,g}(K(\omega))\big] = a(N)^{-1}\Bigg\{\sum_{i=1}^{\min(m,r)}\big(C_i^{\varepsilon} - C_i^{g}\big)$

$\quad + \sum_{s=\min(m,r)+1}^{\max(m,r)}\bigg[\max\Big(0,\ \tfrac{|C_s^{\varepsilon}(\omega)| + C_s^{\varepsilon}(\omega)}{2}\Big) + \min\Big(0,\ \tfrac{C_s^{\varepsilon}(\omega) - |C_s^{\varepsilon}(\omega)|}{2}\Big)\bigg]$

$\quad + \sum_{u=\min(m,r)+1}^{\max(m,r)}\bigg[\max\Big(0,\ \tfrac{|C_u^{g}(\omega)| + C_u^{g}(\omega)}{2}\Big) + \min\Big(0,\ \tfrac{C_u^{g}(\omega) - |C_u^{g}(\omega)|}{2}\Big)\bigg]\Bigg\}$  (91)

$\le a(N)^{-1}\Bigg\{\sum_{i=1}^{\min(m,r)}\big|C_i^{\varepsilon} - C_i^{g}\big|$

$\quad + \sum_{s=\min(m,r)+1}^{\max(m,r)}\bigg[\max\Big(0,\ \tfrac{|C_s^{\varepsilon}(\omega)| + C_s^{\varepsilon}(\omega)}{2}\Big) + \min\Big(0,\ \tfrac{C_s^{\varepsilon}(\omega) - |C_s^{\varepsilon}(\omega)|}{2}\Big)\bigg]$

$\quad + \sum_{u=\min(m,r)+1}^{\max(m,r)}\bigg[\max\Big(0,\ \tfrac{|C_u^{g}(\omega)| + C_u^{g}(\omega)}{2}\Big) + \min\Big(0,\ \tfrac{C_u^{g}(\omega) - |C_u^{g}(\omega)|}{2}\Big)\bigg]\Bigg\}$  (92)
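The max/min terms in (91) are the usual positive-part/negative-part decomposition: for any real confidence value $C$, $\max\{0, (|C| + C)/2\} = \max(0, C)$ and $\min\{0, (C - |C|)/2\} = \min(0, C)$, so the two terms sum back to $C$. A quick numeric check (the sign inside the min term is our reading of the garbled display, and the helper names are ours):

```python
# Check of the positive/negative-part split used in the summands of (91):
# any C, positive or negative, satisfies C = pos_part(C) + neg_part(C).
def pos_part(c):
    return max(0.0, (abs(c) + c) / 2)   # equals max(0, c)

def neg_part(c):
    return min(0.0, (c - abs(c)) / 2)   # equals min(0, c)

for c in [-3.5, -1.0, 0.0, 0.25, 7.0]:
    assert pos_part(c) + neg_part(c) == c            # terms recombine to C
    assert abs(pos_part(c)) + abs(neg_part(c)) == abs(c)
```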
Let $I_1(N)$, $I_2(N)$, $I_3(N)$ be the values of the summands above, in order. Assuming that supply and demand are satisfied where the curves intersect, i.e. $0 \le |C_i^{\varepsilon} - C_i^{g}| \le H$, then according to (88), the growth of $a(N)$ exceeds that of the summand for $I_1(N)$ when $|C_i^{\varepsilon} - C_i^{g}| = 0$ for countably many $i$. So we have

$\lim_{N\to\infty} a(N)^{-1}I_1(N) = 0$  (93)

$\lim_{N\to\infty} a(N)^{-1}\{I_2(N) + I_3(N)\} = C > 0$  (94)

According to Lemma 3.3, equation (94) implies that either $a(N)^{-1}I_2(N) = C$ and $a(N)^{-1}I_3(N) = 0$, or vice versa, where $C$ is the limiting confidence trajectory or ergodic component. So that $E[C]$ exists a.s. $P$. In which case we have from Chebyshev's inequality

$P\Big(\limsup_{N\to\infty}\, a(N)^{-1}\big|D_{N,\varepsilon}(K(\omega)) - S_{N,g}(K(\omega))\big| > \delta\Big)$  (95)

$\quad \le \delta^{-2}\,E\Big[\big(a(N)^{-1}\big[D_{N,\varepsilon}(K(\omega)) - S_{N,g}(K(\omega))\big]\big)^{2}\Big]$  (96)

$\quad = \delta^{-2}\,E[C^{2}]$  (97)

Since $\delta$ is arbitrary, choose $\delta = \|C\|_{L^{2}} = |E[C^{2}]|^{\frac{1}{2}}$ and the proof is done for an almost sure large deviation result. A more realistic result is to choose $\delta$ and $\eta$ such that

$\frac{E[C^{2}]}{\delta^{2}} = 1 - \eta^{2}$  (98)

$\log P\Big(a(N)^{-1}\big[D_{N,\varepsilon}(K(\omega)) - S_{N,g}(K(\omega))\big] > \delta\Big) \le \log(1 - \eta^{2}) \le -\eta^{2}$  (99)

$P\Big(a(N)^{-1}\big[D_{N,\varepsilon}(K(\omega)) - S_{N,g}(K(\omega))\big] > \delta\Big) \le e^{-\eta^{2}} = e^{\frac{E[C^{2}]}{\delta^{2}} - 1}$  (100)
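The algebra behind (98)-(100) can be checked numerically: once $E[C^{2}]/\delta^{2} = 1 - \eta^{2}$, the tail bound follows from $\log(1 - \eta^{2}) \le -\eta^{2}$. A small sketch with an assumed value for $E[C^{2}]$ (both numbers below are hypothetical):

```python
import math

# Hypothetical inputs: E[C^2] and delta are assumed, not from the paper.
EC2 = 0.64                       # assumed value of E[C^2]
delta = 1.0
eta2 = 1.0 - EC2 / delta**2      # eta^2 defined by (98)

# Step used in (99): log(1 - x) <= -x for x in [0, 1)
assert math.log(1.0 - eta2) <= -eta2

# Identity in (100): exp(-eta^2) = exp(E[C^2]/delta^2 - 1)
assert math.isclose(math.exp(-eta2), math.exp(EC2 / delta**2 - 1.0))
```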
Remark A.1. The proof also follows from an application of the Borel-Cantelli Lemma to the tail events described above.
References

Abdellaoui, M., A. Baillon, L. Placido, and P. P. Wakker (2011, April). The Rich Domain of Uncertainty: Source Functions and Their Experimental Implementation. American Economic Review 101(2), 695-723.

Abdellaoui, M., H. Bleichrodt, and H. Kammoun (2012, Oct.). Do financial professionals behave according to prospect theory? An experimental study. Theory and Decision, 1-19. Forthcoming.

Artzner, P., F. Delbaen, J.-M. Eber, and D. Heath (1999). Coherent Measures of Risk. Mathematical Finance 9(3), 203-228.

Ash, R. (1965). Information Theory, Volume XIX of Pure and Applied Mathematics. New York, NY: John Wiley & Sons, Inc.

Baker, M. and J. Wurgler (2007, Spring). Investor Sentiment in the Stock Market. Journal of Economic Perspectives 21(2), 129-151.

Banks, J., J. Brooks, G. Cairns, G. Davis, and P. Stacey (1992, April). On Devaney's Definition of Chaos. American Mathematical Monthly 99(4), 332-334.

Bharucha-Reid, A. T. (1972). Random Integral Equations, Volume 96 of Mathematics in Science and Engineering. New York, NY: Academic Press.

Cerreia-Vioglio, S., P. Ghirardato, F. Maccheroni, M. Marinacci, and M. Siniscalchi (2011). Rational preferences under ambiguity. Economic Theory 48, 341-375.

Cerreia-Vioglio, S., F. Maccheroni, M. Marinacci, and L. Montrucchio (2011). Uncertainty averse preferences. Journal of Economic Theory 146(4), 1275-1330.

Chateauneuf, A. and J. H. Faro (2009). Ambiguity through confidence functions. Journal of Mathematical Economics 45(9-10), 535-558.

Chateauneuf, A. and J. H. Faro (2012). On the Confidence Preference Model. Fuzzy Sets and Systems 188, 1-15.

Chen, S.-S. (2011). Lack of consumer confidence and stock returns. Journal of Empirical Finance 18(2), 225-236.

Chicago Board Options Exchange (2009). The CBOE Volatility Index - VIX. Available at http://www.cboe.com/micro/vix/vixwhite.pdf. Last visited 4/30/2012.

Chow, G. C. (1960, Jul.). Tests of Equality Between Sets of Coefficients in Two Linear Regressions. Econometrica 28(3), 591-605.

DeGroot, M. (1970). Optimal Statistical Decisions. New York, NY: McGraw-Hill, Inc.

Devaney, R. L. (1989). An Introduction to Chaotic Dynamical Systems (2nd ed.). Studies in Nonlinearity. Reading, MA: Addison-Wesley Publishing Co., Inc.

Dudley, R. M. (1968). Distances of Probability Measures and Random Variables. Annals of Mathematical Statistics 39(5), 1563-1572.

Dudley, R. M. (2002). Real Analysis and Probability. New York, NY: Cambridge Univ. Press.

Edwards, R. E. (1965). Functional Analysis: Theory and Applications. New York, NY: Holt, Rinehart, Winston, Inc.

Ellsberg, D. (1961). Risk, Ambiguity, and the Savage Axioms. Quarterly Journal of Economics 75(4), 643-669.

Fedel, M., H. Hosni, and F. Montagna (2011, Nov.). A Logical Characterization of Coherence for Imprecise Probabilities. International Journal of Approximate Reasoning 52(8), 1147-1170.

Feller, W. (1970). An Introduction to Probability Theory and Its Applications, Volume II. New York, NY: John Wiley & Sons, Inc.

Fellner, W. (1961). Distortion of Subjective Probabilities as a Reaction to Uncertainty. Quarterly Journal of Economics 75(4), 670-689.

Foley, D. K. (1994). A statistical equilibrium theory of markets. Journal of Economic Theory 62(2), 321-345.

Fox, C. R., B. A. Rogers, and A. Tversky (1996, Nov.). Options Traders Exhibit Subadditive Decision Weights. Journal of Risk and Uncertainty 13(1), 5-17.

Frazzini, A. and L. Pedersen (2010, Dec.). Betting Against Beta. NBER Working Paper No. 16601. Available at http://www.nber.org/papers/w16601.

Gikhman, I. I. and A. V. Skorokhod (1969). Introduction to the Theory of Random Processes. Philadelphia, PA: W. B. Saunders, Co. Dover reprint 1996.

Gilboa, I. and D. Schmeidler (1989). Maxmin Expected Utility with Non-unique Prior. Journal of Mathematical Economics 18(2), 141-153.

Goldstein, R. S. (2000, Summer). The Term Structure of Interest Rates as a Random Field. Review of Financial Studies 13(2), 365-384.

Gonzalez, R. and G. Wu (1999). On the shape of the probability weighting function. Cognitive Psychology 38, 129-166.

Harrison, J. M. and D. M. Kreps (1979). Martingales and Arbitrage in Multiperiod Securities Markets. Journal of Economic Theory 20, 381-408.

Hill, B. (2010). Confidence and Ambiguity. Working paper (Cahier) No. DRI-2010-03. Available at http://www.hec.fr/hill.

Hille, E. and R. S. Phillips (1957). Functional Analysis and Semi-Groups (Revised and expanded ed.), Volume 31 of AMS Colloquium Publications. Providence, RI: American Mathematical Society.

Karatzas, I. and S. E. Shreve (1991). Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics. New York, NY: Springer-Verlag.

Kindermann, R. and J. L. Snell (1980). Markov Random Fields and Their Applications. Contemporary Mathematics. Providence, RI: American Mathematical Society.

Klibanoff, P., M. Marinacci, and S. Mukerji (2005). A Smooth Model of Decision Making under Ambiguity. Econometrica 73(6), 1849-1892.

Kmenta, J. (1986). Elements of Econometrics (2nd ed.). New York, NY: Macmillan Publishers, Inc.

Lemmon, M. and E. Portniaguina (2006, Winter). Consumer Confidence and Asset Pricing. Review of Financial Studies 19(4), 1499-1529.

Lichtenstein, S. and P. Slovic (1973, Nov.). Response-induced reversals of preference in gambling: An extended replication in Las Vegas. Journal of Experimental Psychology 101(1), 16-20.

Loeve, M. (1978). Probability Theory II, Volume 46 of Graduate Texts in Mathematics. New York, NY: Springer-Verlag.

Luce, R. D. (2001). Reduction invariance and Prelec's weighting functions. Journal of Mathematical Psychology 45, 167-179.

Maccheroni, F., M. Marinacci, and A. Rustichini (2006). Ambiguity Aversion, Robustness, and the Variational Representation of Preferences. Econometrica 74(6), 1447-1498.

Machina, M. (1982, March). "Expected Utility" Analysis without the Independence Axiom. Econometrica 50(2), 277-323.

Machina, M. J. and D. Schmeidler (1992, Jul.). A More Robust Definition of Subjective Probability. Econometrica 60(4), 745-780.

Manski, C. F. (2004, Sept.). Measuring Expectations. Econometrica 72(5), 1329-1376.

Maymin, P., Z. Maymin, and G. S. Fisher (2011). Momentum's Hidden Sensitivity to the Starting Day. SSRN eLibrary. Available at http://ssrn.com/paper=1899000.

McShane, E. J. and T. A. Botts (1959). Real Analysis. Princeton, NJ: Van Nostrand Co., Inc.

Merkle, E. C., M. Smithson, and J. Verkuilen (2011). Hierarchical models of simple mechanisms underlying confidence in decision making. Journal of Mathematical Psychology 55(1), 57-67. Special Issue on Hierarchical Bayesian Models.

Moskowitz, T. J., Y. H. Ooi, and L. H. Pedersen (2012, May). Time series momentum. Journal of Financial Economics 104(2), 228-250. Special Issue on Investor Sentiment.

Pleskac, T. J. and J. R. Busemeyer (2010, Jul.). Two-stage dynamic signal detection: A theory of choice, decision time, and confidence. Psychological Review 117(3), 864-901.

Prelec, D. (1998). The probability weighting function. Econometrica 66(3), 497-527.

Riesz, F. and B. Sz.-Nagy (1956). Functional Analysis (2nd ed.). London, UK: Blackie and Sons, Ltd. Translated from the French edition.

Rigotti, L., C. Shannon, and T. Strzalecki (2008). Subjective beliefs and ex ante trade. Econometrica 76(5), 1167-1190.

Shiller, R. (2000). Measuring Bubble Expectations and Investor Confidence. Journal of Psychology and Financial Markets 1(1), 49-60.

Stein, E. M. (2010). Some geometrical concepts arising in harmonic analysis. In N. Alon, J. Bourgain, A. Connes, M. Gromov, and V. Milman (Eds.), Visions in Mathematics, Modern Birkhäuser Classics, pp. 434-453. Birkhäuser Basel. See also GAFA Geom. funct. anal., Special Volume 2000.

Stiglitz, J. E. and A. Weiss (1981, June). Credit Rationing in Markets with Imperfect Information. American Economic Review 71(3), 393-410.

Strassen, V. (1965). The Existence of Probability Measures with Given Marginals. Annals of Mathematical Statistics 36(2), 423-439.

Sundaram, R. K. (1997, Fall). Equivalent Martingale Measures and Risk-Neutral Pricing: An Expository Note. Journal of Derivatives 5(1), 85-98.

Tversky, A. and D. Kahneman (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5, 297-323.

Tversky, A. and P. Wakker (1995, Nov.). Risk Attitudes and Decision Weights. Econometrica 63(6), 1255-1280.

Vellekoop, M. and R. Berglund (1994, April). On Intervals, Transitivity = Chaos. American Mathematical Monthly 101(4), 353-355.

Von Neumann, J. and O. Morgenstern (1953). Theory of Games and Economic Behavior (3rd ed.). Princeton, NJ: Princeton University Press.

Wu, G. and R. Gonzalez (1996). Curvature of the probability weighting function. Management Science 42, 1676-1690.

Yaari, M. (1987, Jan.). The dual theory of choice under risk. Econometrica 55(1), 95-115.

Yosida, K. (1960). Lectures on Differential and Integral Equations, Volume X of Pure and Applied Mathematics. New York, NY: John Wiley & Sons, Inc.

Yosida, K. (1980). Functional Analysis (6th ed.). New York: Springer-Verlag.

Zadeh, L. (1968). Probability Measures of Fuzzy Events. Journal of Mathematical Analysis and Applications 23(2), 421-427.