Slides univ-van-amsterdam


Transcript of Slides univ-van-amsterdam

Page 1: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Estimating quantiles and related risk measures

Arthur Charpentier

[email protected]

Universiteit van Amsterdam, January 2008

joint work with Abder Oulidi, IMA Angers

1

Page 2: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures
• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation : classical techniques
• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation : use of Beta kernels
• Beta kernel estimation
• Transforming observations

A simulation based study

2

Page 3: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures
• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation : classical techniques
• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation : use of Beta kernels
• Beta kernel estimation
• Transforming observations

A simulation based study

3

Page 4: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Risk measures and price of a risk

Pascal, Fermat, Condorcet, Huygens and d'Alembert, in the XVIIth and XVIIIth centuries, proposed to evaluate the "produit scalaire des probabilités et des gains" (the scalar product of the probabilities and the gains),

$$\langle p, x\rangle = \sum_{i=1}^n p_i x_i = \mathbb{E}_{\mathbb{P}}(X),$$

based on the "règle des parties" (the rule for dividing the stakes).

For Quetelet, the expected value was, in the context of insurance, the price that guarantees a financial equilibrium.

From this idea, we consider in insurance the pure premium as $\mathbb{E}_{\mathbb{P}}(X)$. As in Cournot (1843), "l'espérance mathématique est donc le juste prix des chances" (the mathematical expectation is thus the fair price of the chances), or the "fair price" mentioned in Feller (1953).

4

Page 5: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Risk measures : the expected utility approach

$$R_u(X) = \int u(x)\,d\mathbb{P} = \int \mathbb{P}(u(X) > x)\,dx$$

where $u : [0,\infty)\to[0,\infty)$ is a utility function.

Example: with an exponential utility, $u(x) = [1-e^{-\alpha x}]/\alpha$,

$$R_u(X) = \frac{1}{\alpha}\log\left(\mathbb{E}_{\mathbb{P}}\left(e^{\alpha X}\right)\right).$$

5

Page 6: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Risk measures : Yarri’s dual approach

$$R_g(X) = \int x\,d(g\circ\mathbb{P}) = \int g(\mathbb{P}(X > x))\,dx$$

where $g : [0,1]\to[0,1]$ is a distortion function.

Examples:
– if $g(x) = \mathbf{1}(x \ge 1-\alpha)$, then $R_g(X) = \mathrm{VaR}(X,\alpha)$,
– if $g(x) = \min\{x/(1-\alpha),\,1\}$, then $R_g(X) = \mathrm{TVaR}(X,\alpha)$ (also called expected shortfall), i.e. $R_g(X) = \mathbb{E}_{\mathbb{P}}(X\mid X > \mathrm{VaR}(X,\alpha))$.
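As a concrete illustration (not on the original slide), both examples can be computed directly from a sample of losses. A minimal R sketch, where the vector x and the function name var_tvar are ours:

## empirical VaR and TVaR (expected shortfall) at level alpha
var_tvar <- function(x, alpha = 0.95) {
  v  <- quantile(x, probs = alpha, names = FALSE)   # VaR(X, alpha): the alpha-quantile
  es <- mean(x[x > v])                              # TVaR(X, alpha) = E(X | X > VaR(X, alpha))
  c(VaR = v, TVaR = es)
}

set.seed(1)
x <- rlnorm(1000)                                   # simulated positive losses
var_tvar(x, 0.95)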

6

Page 7: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distortion of values versus distortion of probabilities

Fig. 1 – Expected value $\int x\,dF_X(x) = \int \mathbb{P}(X > x)\,dx$.

7

Page 8: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distortion of values versus distortion of probabilities

Fig. 2 – Expected utility $\int u(x)\,dF_X(x)$.

8

Page 9: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distortion of values versus distortion of probabilities

Fig. 3 – Distorted probabilities $\int g(\mathbb{P}(X > x))\,dx$ (the Choquet integral).

9

Page 10: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distorted risk measures and quantiles

Equivalently, note that $\mathbb{E}(X) = \int_0^1 F_X^{-1}(1-u)\,du$, and

$$R_g(X) = \int_0^1 F_X^{-1}(1-u)\,dg(u).$$

A very general class of risk measures can be defined as follows,

$$R_g(X) = \int_0^1 F_X^{-1}(1-u)\,dg(u)$$

where $g$ is a distortion function, i.e. increasing, with $g(0) = 0$ and $g(1) = 1$.

Note that $g$ is a cumulative distribution function, so $R_g(X)$ is a weighted sum of quantiles, where $dg(1-\cdot)$ denotes the distribution of the weights.
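This representation can be turned into a simple numerical recipe (not on the original slide): discretise $[0,1]$, weight the empirical quantiles $F_X^{-1}(1-u)$ by the increments of $g$, and sum. A minimal R sketch, where distorted_risk and g_tvar are names we introduce; the distortion below is the TVaR example from the earlier slide:

## R_g(X) = int_0^1 F^{-1}(1-u) dg(u), approximated by a Riemann sum
distorted_risk <- function(x, g, m = 1000) {
  u   <- seq(0, 1, length.out = m + 1)
  dg  <- diff(g(u))                                   # weight dg on each sub-interval
  mid <- (u[-1] + u[-(m + 1)]) / 2                    # mid-point of each sub-interval
  q   <- quantile(x, probs = 1 - mid, names = FALSE)  # F^{-1}(1 - u) at the mid-points
  sum(q * dg)
}

g_tvar <- function(u, alpha = 0.95) pmin(u / (1 - alpha), 1)

set.seed(1)
x <- rlnorm(1000)
distorted_risk(x, g_tvar)      # close to the empirical 95% expected shortfall of x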

10

Page 11: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 4 – Distortion function, g and dg (panels: VaR (quantile), TVaR (expected shortfall) and a third distortion function, as cdf and pdf, plotted against 1 − probability level).

11

Page 12: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures
• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation : classical techniques
• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation : use of Beta kernels
• Beta kernel estimation
• Transforming observations

A simulation based study

12

Page 13: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a parametric approach

If $F_X \in \mathcal{F} = \{F_\theta,\ \theta\in\Theta\}$ (assumed to be continuous), $q_X(\alpha) = F_\theta^{-1}(\alpha)$, and thus, a natural estimator is

$$\widehat{q}_X(\alpha) = F_{\widehat{\theta}}^{-1}(\alpha), \qquad (1)$$

where $\widehat{\theta}$ is an estimator of $\theta$ (maximum likelihood, moments estimator, ...).

13

Page 14: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using the Gaussian distribution

A natural idea (that can be found in classical financial models) is to assume a Gaussian distribution: if $X\sim\mathcal{N}(\mu,\sigma)$, then the $\alpha$-quantile is simply

$$q(\alpha) = \mu + \Phi^{-1}(\alpha)\,\sigma,$$

where $\Phi^{-1}(\alpha)$ is obtained from statistical tables (or any statistical software), e.g. $\Phi^{-1}(\alpha) = -1.64$ if $\alpha = 5\%$ and $-1.96$ if $\alpha = 2.5\%$ (equivalently $+1.64$ and $+1.96$ for $\alpha = 95\%$ and $97.5\%$).

Definition 1. Given an $n$-sample $\{X_1,\cdots,X_n\}$, the (Gaussian) parametric estimation of the $\alpha$-quantile is

$$\widehat{q}_n(\alpha) = \widehat{\mu} + \Phi^{-1}(\alpha)\,\widehat{\sigma},$$
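A minimal R sketch of Definition 1 (the function name q_gauss is ours, not from the slides), with $\widehat{\mu}$ and $\widehat{\sigma}$ the sample mean and standard deviation:

## Gaussian parametric quantile estimator (Definition 1)
q_gauss <- function(x, alpha) mean(x) + qnorm(alpha) * sd(x)

set.seed(1)
x <- rnorm(500, mean = 2, sd = 3)
q_gauss(x, 0.95)               # to be compared with the true quantile qnorm(0.95, 2, 3)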

14

Page 15: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a parametric model

Actually, if the Gaussian model does not fit very well, it is still possible to use a Gaussian approximation.

If the variance is finite, $(X-\mathbb{E}(X))/\sigma$ might be closer to the Gaussian distribution, and thus one can consider the so-called Cornish-Fisher approximation, i.e.

$$Q(X,\alpha) \sim \mathbb{E}(X) + z_\alpha\sqrt{V(X)}, \qquad (2)$$

where

$$z_\alpha = \Phi^{-1}(\alpha) + \frac{\zeta_1}{6}\left[\Phi^{-1}(\alpha)^2-1\right] + \frac{\zeta_2}{24}\left[\Phi^{-1}(\alpha)^3-3\Phi^{-1}(\alpha)\right] - \frac{\zeta_1^2}{36}\left[2\Phi^{-1}(\alpha)^3-5\Phi^{-1}(\alpha)\right],$$

where $\zeta_1$ is the skewness of $X$ and $\zeta_2$ is the excess kurtosis, i.e.

$$\zeta_1 = \frac{\mathbb{E}([X-\mathbb{E}(X)]^3)}{\mathbb{E}([X-\mathbb{E}(X)]^2)^{3/2}} \quad\text{and}\quad \zeta_2 = \frac{\mathbb{E}([X-\mathbb{E}(X)]^4)}{\mathbb{E}([X-\mathbb{E}(X)]^2)^2} - 3. \qquad (3)$$

15

Page 16: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a parametric model

Definition 2. Given an $n$-sample $\{X_1,\cdots,X_n\}$, the Cornish-Fisher estimation of the $\alpha$-quantile is

$$\widehat{q}_n(\alpha) = \widehat{\mu} + \widehat{z}_\alpha\,\widehat{\sigma}, \quad\text{where}\quad \widehat{\mu} = \frac{1}{n}\sum_{i=1}^n X_i \quad\text{and}\quad \widehat{\sigma} = \sqrt{\frac{1}{n-1}\sum_{i=1}^n (X_i-\widehat{\mu})^2},$$

and

$$\widehat{z}_\alpha = \Phi^{-1}(\alpha) + \frac{\widehat{\zeta}_1}{6}\left[\Phi^{-1}(\alpha)^2-1\right] + \frac{\widehat{\zeta}_2}{24}\left[\Phi^{-1}(\alpha)^3-3\Phi^{-1}(\alpha)\right] - \frac{\widehat{\zeta}_1^2}{36}\left[2\Phi^{-1}(\alpha)^3-5\Phi^{-1}(\alpha)\right], \qquad (4)$$

where $\widehat{\zeta}_1$ is the natural estimator of the skewness of $X$, and $\widehat{\zeta}_2$ is the natural estimator of the excess kurtosis, i.e.

$$\widehat{\zeta}_1 = \frac{\sqrt{n(n-1)}}{n-2}\cdot\frac{\sqrt{n}\sum_{i=1}^n (X_i-\widehat{\mu})^3}{\left(\sum_{i=1}^n (X_i-\widehat{\mu})^2\right)^{3/2}} \quad\text{and}\quad \widehat{\zeta}_2 = \frac{n-1}{(n-2)(n-3)}\left((n+1)\widehat{\zeta}_2' + 6\right), \quad\text{where}\quad \widehat{\zeta}_2' = \frac{n\sum_{i=1}^n (X_i-\widehat{\mu})^4}{\left(\sum_{i=1}^n (X_i-\widehat{\mu})^2\right)^2} - 3.$$
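A minimal R sketch of Definition 2 (not from the slides; q_cornish_fisher is our name), plugging the sample skewness and excess kurtosis into the Cornish-Fisher expansion (4):

## Cornish-Fisher quantile estimator (Definition 2)
q_cornish_fisher <- function(x, alpha) {
  n <- length(x); m <- mean(x); s <- sd(x)
  z1 <- sqrt(n * (n - 1)) / (n - 2) *
        sqrt(n) * sum((x - m)^3) / (sum((x - m)^2))^(3 / 2)     # estimated skewness
  z2p <- n * sum((x - m)^4) / (sum((x - m)^2))^2 - 3
  z2 <- (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * z2p + 6)     # estimated excess kurtosis
  q  <- qnorm(alpha)
  za <- q + z1 / 6 * (q^2 - 1) + z2 / 24 * (q^3 - 3 * q) - z1^2 / 36 * (2 * q^3 - 5 * q)
  m + za * s
}

set.seed(1)
x <- rlnorm(1000)               # a skewed sample
q_cornish_fisher(x, 0.95)       # to be compared with quantile(x, 0.95)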

16

Page 17: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Parametric estimators and model error

Fig. 5 – Estimation of Value-at-Risk, model error (left: theoretical lognormal density versus fitted lognormal and fitted gamma; right: theoretical Student density versus fitted Student and fitted Gaussian).

17

Page 18: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a semiparametric model

Given an $n$-sample $\{Y_1,\ldots,Y_n\}$, let $Y_{1:n}\le Y_{2:n}\le\ldots\le Y_{n:n}$ denote the associated order statistics.

If $u$ is large enough, $Y-u$ given $Y>u$ has a Generalized Pareto distribution with parameters $\xi$ and $\beta$ (Pickands-Balkema-de Haan theorem).

If $u = Y_{n-k:n}$ for $k$ large enough, and if $\xi>0$, denote by $\widehat{\beta}_k$ and $\widehat{\xi}_k$ the maximum likelihood estimators of the Generalized Pareto distribution fitted to the sample $\{Y_{n-k+1:n}-Y_{n-k:n},\ldots,Y_{n:n}-Y_{n-k:n}\}$; then

$$\widehat{Q}(Y,\alpha) = Y_{n-k:n} + \frac{\widehat{\beta}_k}{\widehat{\xi}_k}\left(\left(\frac{n}{k}(1-\alpha)\right)^{-\widehat{\xi}_k} - 1\right). \qquad (5)$$

An alternative is to use Hill's estimator, if $\xi>0$,

$$\widehat{Q}(Y,\alpha) = Y_{n-k:n}\left(\frac{n}{k}(1-\alpha)\right)^{-\widehat{\xi}_k}, \qquad \widehat{\xi}_k = \frac{1}{k}\sum_{i=1}^k \log Y_{n+1-i:n} - \log Y_{n-k:n}. \qquad (6)$$
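A minimal R sketch of the Hill-based tail quantile estimator (6), not from the slides (the GPD-based estimator (5) would additionally require maximum likelihood estimation of $\beta$ and $\xi$, omitted here); q_hill is our name:

## Hill tail quantile estimator, equation (6), assuming xi > 0
q_hill <- function(y, alpha, k = floor(length(y) / 5)) {
  n  <- length(y)
  ys <- sort(y)                                            # order statistics Y_{1:n} <= ... <= Y_{n:n}
  xi <- mean(log(ys[(n - k + 1):n]) - log(ys[n - k]))      # Hill estimator of xi
  ys[n - k] * ((n / k) * (1 - alpha))^(-xi)
}

set.seed(1)
y <- exp(rexp(2000))             # heavy-tailed (Pareto-type) sample
q_hill(y, 0.99)                  # to be compared with quantile(y, 0.99)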

18

Page 19: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

On nonparametric estimation for quantiles

For a continuous distribution, $q(\alpha) = F_X^{-1}(\alpha)$; thus, a natural idea would be to consider $\widehat{q}(\alpha) = \widehat{F}_X^{-1}(\alpha)$, for some nonparametric estimation $\widehat{F}_X$ of $F_X$.

Definition 3. The empirical cumulative distribution function $F_n$, based on the sample $\{X_1,\ldots,X_n\}$, is

$$F_n(x) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(X_i\le x).$$

Definition 4. The kernel based cumulative distribution function, based on the sample $\{X_1,\ldots,X_n\}$, is

$$\widehat{F}_n(x) = \frac{1}{nh}\sum_{i=1}^n \int_{-\infty}^x k\left(\frac{X_i-t}{h}\right)dt = \frac{1}{n}\sum_{i=1}^n K\left(\frac{x-X_i}{h}\right)$$

where $K(x) = \int_{-\infty}^x k(t)\,dt$, $k$ being a (symmetric) kernel and $h$ the bandwidth.
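A minimal R sketch of Definition 4 with a Gaussian kernel (so that K is pnorm), together with the quantile obtained by numerical inversion of the smoothed cdf; F_kernel and q_kernel are our names, and the bandwidth defaults to the standard rule of thumb bw.nrd0:

## kernel-smoothed cdf (Definition 4) and the corresponding quantile, by numerical inversion
F_kernel <- function(x, X, h) sapply(x, function(t) mean(pnorm((t - X) / h)))

q_kernel <- function(X, alpha, h = bw.nrd0(X)) {
  uniroot(function(t) F_kernel(t, X, h) - alpha,
          interval = range(X) + c(-4, 4) * h)$root
}

set.seed(1)
X <- rlnorm(500)
q_kernel(X, 0.95)                # to be compared with quantile(X, 0.95)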

19

Page 20: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Smoothing nonparametric estimators

Two techniques have been considered to smooth the estimation of quantiles, either implicitly or explicitly:

• consider a linear combination of order statistics.

The classical empirical quantile estimate is simply

$$\widehat{Q}_n(p) = F_n^{-1}\left(\frac{i}{n}\right) = X_{i:n} = X_{[np]:n}, \quad\text{where } [\cdot] \text{ denotes the integer part.} \qquad (7)$$

This estimator is simple to obtain, but depends only on one observation. A natural extension is to use - at least - two observations when $np$ is not an integer. The weighted empirical quantile estimate is then defined as

$$\widetilde{Q}_n(p) = (1-\gamma)X_{[np]:n} + \gamma X_{[np]+1:n}, \quad\text{where } \gamma = np-[np].$$
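This interpolated estimator is one of the variants implemented in R's quantile() function, illustrated in Figure 6 below. A quick check (not from the slides; q_weighted is our name):

## weighted empirical quantile versus R's built-in estimators
q_weighted <- function(x, p) {
  xs <- sort(x); n <- length(x)
  i  <- floor(n * p)                          # [np]
  g  <- n * p - i                             # gamma = np - [np]
  (1 - g) * xs[i] + g * xs[i + 1]
}

set.seed(1)
x <- rnorm(100)
q_weighted(x, 0.9)
quantile(x, 0.9, type = 4)       # type 4 relies on the same linear interpolation of the inverse ecdf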

20

Page 21: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 6 – Several quantile estimators in R (the quantile function for types 1, 3, 5 and 7, as a function of the probability level).

21

Page 22: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Smoothing nonparametric estimators

In order to increase efficiency, L-statistics can be considered, i.e.

$$\widehat{Q}_n(p) = \sum_{i=1}^n W_{i,n,p}\,X_{i:n} = \sum_{i=1}^n W_{i,n,p}\,F_n^{-1}\left(\frac{i}{n}\right) = \int_0^1 F_n^{-1}(t)\,k(p,h,t)\,dt \qquad (8)$$

where $F_n$ is the empirical distribution function of the sample, $k$ is a kernel and $h$ a bandwidth. This expression can be written equivalently as

$$\widehat{Q}_n(p) = \sum_{i=1}^n \left[\int_{(i-1)/n}^{i/n} k\left(\frac{t-p}{h}\right)dt\right]X_{(i)} = \sum_{i=1}^n \left[IK\left(\frac{i/n-p}{h}\right) - IK\left(\frac{(i-1)/n-p}{h}\right)\right]X_{(i)} \qquad (9)$$

where again $IK(x) = \int_{-\infty}^x k(t)\,dt$. The idea is to give more weight to order statistics $X_{(i)}$ such that $i$ is close to $pn$.
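A minimal R sketch of (9) with a Gaussian kernel (so that IK is pnorm), not from the slides; the weights are renormalised so that they sum to one, which matters when p is close to 0 or 1:

## kernel-weighted L-statistic quantile estimator, equation (9)
q_Lstat <- function(x, p, h = 0.05) {
  xs <- sort(x); n <- length(x); i <- 1:n
  w  <- pnorm((i / n - p) / h) - pnorm(((i - 1) / n - p) / h)   # differences of IK
  sum(w * xs) / sum(w)
}

set.seed(1)
x <- rlnorm(500)
q_Lstat(x, 0.95)                 # to be compared with quantile(x, 0.95)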

22

Page 23: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 7 – Quantile estimator as weighted sum of order statistics (kernel weights plotted against the quantile (probability) level).

23

Page 24: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 8 – Quantile estimator as weighted sum of order statistics.

24

Page 25: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 9 – Quantile estimator as weighted sum of order statistics.

25

Page 26: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 10 – Quantile estimator as weighted sum of order statistics.

26

Page 27: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 11 – Quantile estimator as weighted sum of order statistics.

27

Page 28: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Smoothing nonparametric estimators

E.g. the so-called Harrell-Davis estimator is defined as

$$\widehat{Q}_n(p) = \sum_{i=1}^n \left[\int_{(i-1)/n}^{i/n} \frac{\Gamma(n+1)}{\Gamma((n+1)p)\,\Gamma((n+1)q)}\,y^{(n+1)p-1}(1-y)^{(n+1)q-1}\,dy\right]X_{i:n}, \quad\text{where } q = 1-p;$$

• find a smooth estimator for $F_X$, and then find (numerically) the inverse.

The $\alpha$-quantile is defined as the solution of $F_X\circ q_X(\alpha) = \alpha$.

If $\widehat{F}_n$ denotes a continuous estimate of $F$, then a natural estimate for $q_X(\alpha)$ is $\widehat{q}_n(\alpha)$ such that $\widehat{F}_n\circ\widehat{q}_n(\alpha) = \alpha$, obtained using e.g. a Gauss-Newton algorithm.
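A minimal R sketch of the Harrell-Davis estimator above (not from the slides; q_harrell_davis is our name), writing the Beta integrals with pbeta and taking q = 1 − p:

## Harrell-Davis quantile estimator: Beta((n+1)p, (n+1)(1-p)) weights on the order statistics
q_harrell_davis <- function(x, p) {
  xs <- sort(x); n <- length(x); i <- 1:n
  w  <- pbeta(i / n, (n + 1) * p, (n + 1) * (1 - p)) -
        pbeta((i - 1) / n, (n + 1) * p, (n + 1) * (1 - p))
  sum(w * xs)
}

set.seed(1)
x <- rlnorm(500)
q_harrell_davis(x, 0.95)         # to be compared with quantile(x, 0.95)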

28

Page 29: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures
• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation : classical techniques
• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation : use of Beta kernels
• Beta kernel estimation
• Transforming observations

A simulation based study

29

Page 30: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Kernel based estimation for bounded supports

Classical symmetric kernels work well when estimating densities with unbounded support,

$$\widehat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^n k\left(\frac{x-X_i}{h}\right),$$

where $k$ is a kernel function (e.g. $k(\omega) = \mathbf{1}(|\omega|\le 1)/2$).

But if $k$ is a symmetric kernel and the support of the density is bounded at 0, note that

$$\mathbb{E}(\widehat{f}_h(0)) = \frac{1}{2}f(0) + O(h).$$

30

Page 31: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 12 – Kernel based density estimation of a uniform density on [0, 1].

31

Page 32: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Kernel based estimation for bounded supports

Several techniques have been introduced to get a better estimation at the boundary:
– boundary kernels (Müller (1991)),
– mirror image modification (Deheuvels & Hominal (1989), Schuster (1985)),
– transformed kernels (Devroye & Györfi (1981), Wand, Marron & Ruppert (1991)),
– Beta kernels (Brown & Chen (1999), Chen (1999, 2000)),
see Charpentier, Fermanian & Scaillet (2006) for a survey with application to copulas.

32

Page 33: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Beta kernel estimators

A Beta kernel estimator of the density on [0, 1] (see Chen (1999)) is

$$\widehat{f}_b(x) = \frac{1}{n}\sum_{i=1}^n k\left(X_i,\ 1+\frac{x}{b},\ 1+\frac{1-x}{b}\right), \quad x\in[0,1],$$

where $k(u,\alpha,\beta) = \dfrac{u^{\alpha-1}(1-u)^{\beta-1}}{B(\alpha,\beta)}$, $u\in[0,1]$, is the Beta$(\alpha,\beta)$ density.

If $\{X_1,\cdots,X_n\}$ are i.i.d. variables with density $f_0$, then, as $n\to\infty$ and $b\to 0$ (Bouezmarni & Scaillet (2005)),

$$\widehat{f}_b(x)\to f_0(x), \quad x\in[0,1].$$

This is the Beta 1 estimator.
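A minimal R sketch of the Beta 1 estimator (not from the slides; f_beta1 is our name), using dbeta for the kernel $k(u,\alpha,\beta)$:

## Beta 1 kernel density estimator on [0,1] (Chen, 1999)
f_beta1 <- function(x, X, b) {
  sapply(x, function(t) mean(dbeta(X, 1 + t / b, 1 + (1 - t) / b)))
}

set.seed(1)
X <- rbeta(500, 2, 5)                     # a sample with support [0, 1]
x <- seq(0, 1, by = 0.01)
fhat <- f_beta1(x, X, b = 0.05)           # to be compared with the true density dbeta(x, 2, 5)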

33

Page 34: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 13 – Shape of Beta kernels, for different x's and b's (x = 0.05, 0.10, 0.20, 0.45).

34

Page 35: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Improving Beta kernel estimators

Problem : the convergence is not uniform, and there is a large second order bias at the boundaries, i.e. at 0 and 1.

Chen (1999) proposed a modified Beta 2 kernel estimator, based on

$$k_2(u;b;t) = \begin{cases} k_{t/b,\,(1-t)/b}(u) & \text{if } t\in[2b,\,1-2b],\\ k_{\rho_b(t),\,(1-t)/b}(u) & \text{if } t\in[0,\,2b),\\ k_{t/b,\,\rho_b(1-t)}(u) & \text{if } t\in(1-2b,\,1], \end{cases}$$

where $\rho_b(t) = 2b^2 + 2.5 - \sqrt{4b^4 + 6b^2 + 2.25 - t^2 - \frac{t}{b}}$.
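A minimal R sketch of the Beta 2 kernel above (not from the slides; rho_b, k_beta2 and f_beta2 are our names):

## modified Beta 2 kernel of Chen (1999) and the associated density estimator
rho_b <- function(t, b) 2 * b^2 + 2.5 - sqrt(4 * b^4 + 6 * b^2 + 2.25 - t^2 - t / b)

k_beta2 <- function(u, b, t) {
  if (t < 2 * b)           dbeta(u, rho_b(t, b), (1 - t) / b)       # left boundary region
  else if (t > 1 - 2 * b)  dbeta(u, t / b, rho_b(1 - t, b))         # right boundary region
  else                     dbeta(u, t / b, (1 - t) / b)             # interior
}

f_beta2 <- function(t, X, b) mean(k_beta2(X, b, t))                 # density estimate at the point t

set.seed(1)
X <- rbeta(500, 2, 5)
f_beta2(0.02, X, b = 0.05)       # evaluation near the boundary, where Beta 1 is more biased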

35

Page 36: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Non-consistency of Beta kernel estimators

Problem : $k(0,\alpha,\beta) = k(1,\alpha,\beta) = 0$. So if there are point masses at 0 or 1, the estimator becomes inconsistent: writing $n_0$ and $n_1$ for the number of observations equal to 0 and to 1,

$$\widehat{f}_b(x) = \frac{1}{n}\sum k\left(X_i,\ 1+\frac{x}{b},\ 1+\frac{1-x}{b}\right) = \frac{1}{n}\sum_{X_i\neq 0,1} k\left(X_i,\ 1+\frac{x}{b},\ 1+\frac{1-x}{b}\right)$$

$$= \frac{n-n_0-n_1}{n}\cdot\frac{1}{n-n_0-n_1}\sum_{X_i\neq 0,1} k\left(X_i,\ 1+\frac{x}{b},\ 1+\frac{1-x}{b}\right) \approx \left(1-\mathbb{P}(X=0)-\mathbb{P}(X=1)\right)\cdot f_0(x), \quad x\in[0,1],$$

and therefore $\widehat{F}_b(x)\approx\left(1-\mathbb{P}(X=0)-\mathbb{P}(X=1)\right)\cdot F_0(x)$, and we may have problems finding a 95% or 99% quantile since the total mass will be lower than 1.

36

Page 37: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Non-consistency of Beta kernel estimators

Gourieroux & Monfort (2007) proposed

$$\widehat{f}_b^{(1)}(x) = \frac{\widehat{f}_b(x)}{\int_0^1 \widehat{f}_b(t)\,dt}, \quad\text{for all } x\in[0,1].$$

It is called macro-β since the correction is performed globally.

Gourieroux & Monfort (2007) also proposed

$$\widehat{f}_b^{(2)}(x) = \frac{1}{n}\sum_{i=1}^n \frac{k_\beta(X_i;b;x)}{\int_0^1 k_\beta(X_i;b;t)\,dt}, \quad\text{for all } x\in[0,1].$$

It is called micro-β since the correction is performed locally.
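A minimal R sketch of both corrections (not from the slides), reusing the f_beta1 function sketched earlier for the Beta kernel; integrate() handles the normalising integrals numerically:

## macro-beta: renormalise the whole Beta-kernel density estimate
f_macro <- function(x, X, b) {
  total <- integrate(function(t) f_beta1(t, X, b), 0, 1)$value
  f_beta1(x, X, b) / total
}

## micro-beta: renormalise each kernel k(X_i, ., .) individually
f_micro <- function(x, X, b) {
  ci <- sapply(X, function(Xi)
    integrate(function(t) dbeta(Xi, 1 + t / b, 1 + (1 - t) / b), 0, 1)$value)
  sapply(x, function(t) mean(dbeta(X, 1 + t / b, 1 + (1 - t) / b) / ci))
}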

37

Page 38: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Transforming observations ?

In the context of density estimation, Devroye and Györfi (1985) suggested to use a so-called transformed kernel estimate.

Given a random variable $Y$, if $H$ is a strictly increasing function, then the $p$-quantile of $H(Y)$ is equal to $H(q(Y;p))$.

An idea is to transform the initial observations $\{X_1,\cdots,X_n\}$ into a sample $\{Y_1,\cdots,Y_n\}$ where $Y_i = H(X_i)$, with $H:\mathbb{R}\to[0,1]$, and then to use a Beta-kernel based estimator. Then $\widehat{q}_n(X;p) = H^{-1}(\widehat{q}_n(Y;p))$.

In the context of density estimation, $f_X(x) = f_Y(H(x))\,H'(x)$. As mentioned in Devroye and Györfi (1985) (p. 245), "for a transformed histogram estimate, the optimal $H$ gives a uniform [0, 1] density and should therefore be equal to $H(x) = F(x)$, for all $x$".
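A minimal R sketch of the transformed-kernel quantile estimator (not from the slides; q_transformed is our name): transform the data with a fitted cdf H, estimate the p-quantile of the (approximately uniform) transformed sample with a smooth estimator on [0, 1] - here the q_harrell_davis function sketched earlier, though a Beta-kernel based estimator could be plugged in instead - and map back with the inverse of H:

## transformed-kernel quantile estimation: Y_i = H(X_i), estimate q(Y; p), back-transform
q_transformed <- function(x, p) {
  mu <- mean(log(x)); sg <- sd(log(x))      # fit a lognormal H on the log scale
  H    <- function(t) plnorm(t, mu, sg)
  Hinv <- function(u) qlnorm(u, mu, sg)
  y  <- H(x)                                # transformed sample, approximately uniform on [0,1]
  qy <- q_harrell_davis(y, p)               # smooth quantile estimator on [0,1]
  Hinv(qy)
}

set.seed(1)
x <- rlnorm(500)
q_transformed(x, 0.95)           # to be compared with qlnorm(0.95)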

38

Page 39: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Transforming observations ? A Monte Carlo study

Assume that the sample $\{X_1,\cdots,X_n\}$ has been generated from $F_{\theta_0}$ (from a family $\mathcal{F} = (F_\theta,\ \theta\in\Theta)$). Four transformations will be considered:
– $H = F_{\widehat{\theta}}$ (based on a maximum likelihood procedure),
– $H = F_{\theta_0}$ (theoretical optimal transformation),
– $H = F_\theta$ with $\theta < \theta_0$ (heavier tails),
– $H = F_\theta$ with $\theta > \theta_0$ (lighter tails).

39

Page 40: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 14 – $F(X_i)$ versus $F_{\widehat{\theta}}(X_i)$, i.e. PP plot (transformed observations).

40

Page 41: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 15 – Nonparametric estimation of the density of the $F_{\widehat{\theta}}(X_i)$'s.

41

Page 42: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 16 – Nonparametric estimation of the quantile function, $F_{\widehat{\theta}}^{-1}(q)$.

42

Page 43: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 17 – $F(X_i)$ versus $F_{\theta_0}(X_i)$, i.e. PP plot (transformed observations).

43

Page 44: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 18 – Nonparametric estimation of the density of the $F_{\theta_0}(X_i)$'s.

44

Page 45: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 19 – Nonparametric estimation of the quantile function, $F_{\theta_0}^{-1}(q)$.

45

Page 46: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 20 – $F(X_i)$ versus $F_\theta(X_i)$, i.e. PP plot, $\theta < \theta_0$ (heavier tails).

46

Page 47: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 21 – Estimation of the density of the $F_\theta(X_i)$'s, $\theta < \theta_0$ (heavier tails).

47

Page 48: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 22 – Estimation of the quantile function, $F_\theta^{-1}(q)$, $\theta < \theta_0$ (heavier tails).

48

Page 49: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 23 – $F(X_i)$ versus $F_\theta(X_i)$, i.e. PP plot, $\theta > \theta_0$ (lighter tails).

49

Page 50: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 24 – Estimation of the density of the $F_\theta(X_i)$'s, $\theta > \theta_0$ (lighter tails).

50

Page 51: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 25 – Estimation of the quantile function, $F_\theta^{-1}(q)$, $\theta > \theta_0$ (lighter tails).

51

Page 52: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

A universal distribution for losses

BNGB considered the generalized Champernowne distribution to model insurance claims, i.e. positive variables,

$$F_{\alpha,M,c}(y) = \frac{(y+c)^\alpha - c^\alpha}{(y+c)^\alpha + (M+c)^\alpha - 2c^\alpha}, \quad\text{where } \alpha>0,\ c\ge 0 \text{ and } M>0.$$

The associated density is then

$$f_{\alpha,M,c}(y) = \frac{\alpha(y+c)^{\alpha-1}\left((M+c)^\alpha - c^\alpha\right)}{\left((y+c)^\alpha + (M+c)^\alpha - 2c^\alpha\right)^2}.$$
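A minimal R sketch (not from the slides; p_champ, d_champ and q_champ are our names) of the generalized Champernowne cdf, density and quantile function - the cdf can serve as the transformation H of the previous slides:

## generalized Champernowne distribution: cdf, density and quantile function
p_champ <- function(y, a, M, c = 0) {
  ((y + c)^a - c^a) / ((y + c)^a + (M + c)^a - 2 * c^a)
}
d_champ <- function(y, a, M, c = 0) {
  a * (y + c)^(a - 1) * ((M + c)^a - c^a) /
    ((y + c)^a + (M + c)^a - 2 * c^a)^2
}
q_champ <- function(u, a, M, c = 0) {       # solves F(y) = u analytically
  A <- (u * (M + c)^a + (1 - 2 * u) * c^a) / (1 - u)
  A^(1 / a) - c
}

p_champ(3, a = 1.5, M = 3)       # equals 0.5: M is the median when c = 0
q_champ(0.5, a = 1.5, M = 3)     # returns M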

52

Page 53: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

A Monte Carlo study to compare those nonparametric estimators

As in ....., 4 distributions were considered:
– normal distribution,
– Weibull distribution,
– log-normal distribution,
– mixture of Pareto and log-normal distributions.

53

Page 54: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 26 – Distribution of the 95% quantile of the mixture (lognormal/Pareto) distribution, n = 200, and associated box-plots, for 11 quantile estimators (R benchmark, Epanechnikov, Harrell-Davis, Padgett, Park, Beta 1, MACRO Beta 1, MICRO Beta 1, Beta 2, MACRO Beta 2, MICRO Beta 2).

54

Page 55: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 27 – Comparing MSE for 6 estimators, the normal distribution case (MSE ratios against the probability level p, for n = 50, 100, 200, 500).

55

Page 56: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 28 – Comparing MSE for 6 estimators, the Weibull distribution case (MSE ratios against the probability level p, for n = 50, 100, 200, 500).

56

Page 57: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 29 – Comparing MSE for 9 estimators, the lognormal distribution case (MSE ratios against the probability level p, for n = 50, 100, 200, 500).

57

Page 58: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 30 – Comparing MSE for 9 estimators, the mixture distribution case (MSE ratios against the probability level p, for n = 50, 100, 200, 500).

58

Page 59: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Portfolio optimal allocation

Classical problem as formulated in Markowitz (1952), Journal of Finance:

$$\omega^* \in \arg\min\{\omega'\Sigma\omega\} \quad\text{s.t. } \omega'\mu\ge\eta \text{ and } \omega'\mathbf{1}=1,$$

which, the problem being convex, is equivalent to

$$\omega^* \in \arg\max\{\omega'\mu\} \quad\text{s.t. } \omega'\Sigma\omega\le\eta' \text{ and } \omega'\mathbf{1}=1.$$

Roy (1952), Econometrica: "the optimal bundle of assets (investment) for investors who employ the safety first principle is the portfolio that minimizes the probability of disaster".

$$\omega^* \in \arg\min\{\mathrm{VaR}(\omega'X,\alpha)\} \quad\text{s.t. } \mathbb{E}(\omega'X)\ge\eta \text{ and } \omega'\mathbf{1}=1,$$

which, the problem being nonconvex, is not equivalent to

$$\omega^* \in \arg\max\{\mathbb{E}(\omega'X)\} \quad\text{s.t. } \mathrm{VaR}(\omega'X,\alpha)\le\eta' \text{ and } \omega'\mathbf{1}=1.$$
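Because the VaR-minimisation problem is not convex, the empirical study evaluates the objective on a grid of candidate allocations (see Figure 32 below). A minimal R sketch of that brute-force idea (not the authors' code; min_var_alloc is our name), assuming a matrix X of log-returns with one column per asset, taking the α-quantile of the portfolio loss −ω'X as the VaR, and ignoring the expected-return constraint for brevity:

## brute-force search for the allocation minimising the empirical VaR of the portfolio loss
min_var_alloc <- function(X, alpha = 0.95, n_cand = 2000) {
  k <- ncol(X)
  W <- cbind(matrix(runif(n_cand * (k - 1), -1, 2), n_cand, k - 1), 0)
  W[, k] <- 1 - rowSums(W)                  # enforce the budget constraint w'1 = 1
  loss <- -(X %*% t(W))                     # losses of each candidate portfolio
  vars <- apply(loss, 2, quantile, probs = alpha)
  W[which.min(vars), ]                      # allocation with the smallest empirical VaR
}

set.seed(1)
X <- matrix(rnorm(4 * 500, sd = 0.01), ncol = 4)   # placeholder log-returns, 4 assets
min_var_alloc(X)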

59

Page 60: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Empirical data, Eurostocks

Fig. 31 – Scatterplot of log-returns (pairwise plots of DAX (1), SMI (2), CAC (3) and FTSE (4)).

60

Page 61: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 32 – Value-at-Risk for all possible allocations on the grid G (surface and level curves, as a function of the allocations in the 3rd and 4th assets), with α = 75% on the left and α = 97.5% on the right.

61

Page 62: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 33 – Optimal allocations for different probability levels (α = 75%, 77.5%, 80%, ..., 95%, 97.5%), with allocation for the first asset (top left) up to the fourth asset (bottom right).

62

Page 63: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

            mean-variance   75%             77.5%           80%             82.5%           85%             87.5%           90%             92.5%           95%             97.5%
asset 1     0.2277          0.222 (0.253)   0.206 (0.244)   0.215 (0.259)   0.251 (0.275)   0.307 (0.276)   0.377 (0.241)   0.404 (0.243)   0.394 (0.224)   0.402 (0.214)   0.339 (0.268)
asset 2     0.5393          0.550 (0.141)   0.558 (0.136)   0.552 (0.144)   0.530 (0.152)   0.500 (0.154)   0.460 (0.134)   0.444 (0.135)   0.448 (0.124)   0.441 (0.121)   0.467 (0.151)
asset 3     −0.2516         −0.062 (0.161)  −0.083 (0.176)  −0.106 (0.184)  −0.139 (0.187)  −0.163 (0.215)  −0.196 (0.203)  −0.228 (0.163)  −0.253 (0.141)  −0.310 (0.184)  −0.532 (0.219)
asset 4     0.4846          0.289 (0.162)   0.319 (0.179)   0.339 (0.191)   0.357 (0.204)   0.357 (0.221)   0.359 (0.205)   0.380 (0.175)   0.410 (0.153)   0.466 (0.170)   0.726 (0.200)

Tab. 1 – Mean and standard deviation of estimated optimal allocation, for different quantile levels.

Five quantile estimators are compared:

1. the raw estimator $\widehat{Q}(Y,\alpha) = Y_{[\alpha\cdot n]:n}$,

2. the mixture estimator $\widehat{Q}(Y,\alpha) = \sum_{i=1}^n \lambda_i(\alpha)\,Y_{i:n}$, which is the standard quantile estimate in R (see [?]),

3. the Gaussian estimator $\widehat{Q}(Y,\alpha) = \overline{Y} + z_{1-\alpha}\,\mathrm{sd}(Y)$, where sd denotes the empirical standard deviation,

63

Page 64: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

4. Hill’s estimator, with k = [n/5], Q(Y, α) = Yn−k:n

(nk

(1− α))−ξk

, where

ξk =1k

k∑i=1

logYn+1−i:n

Yn−k:n(assuming that ξ > 0),

5. kernel based estimator is obtained as a mixture of smoothed quantiles,derived as inverse values of a kernel based estimator of the cumulative

distribution function, i.e. Q(Y, α) =n∑i=1

λi(α)F−1(i/n).

64

Page 65: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Fig. 34 – Optimal allocations for different 95% quantile estimators (Est. 1 raw, Est. 2 mixture, Est. 3 Gaussian, Est. 4 Hill, Est. 5 kernel), with allocation for the first asset (top left) up to the fourth asset (bottom right).

65

Page 66: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Some references

Charpentier, A. & Oulidi, A. (2007). Beta kernel estimation for Value-at-Risk of heavy-tailed distributions. In revision, Journal of Computational Statistics and Data Analysis.

Charpentier, A. & Oulidi, A. (2007). Estimating allocations for Value-at-Risk portfolio optimization. To appear in Mathematical Methods in Operations Research.

Chen, S. X. (1999). A Beta Kernel Estimator for Density Functions. Computational Statistics & Data Analysis, 31, 131-145.

Gourieroux, C. & Monfort, A. (2006). (Non) Consistency of the Beta Kernel Estimator for Recovery Rate Distribution. CREST DP 2006-32.

66