Edgeworth expansion for an estimator of the adjustment coefficient

Margarida Brito*
Centro de Matemática & Departamento de Matemática Aplicada, Fac. Ciências, Univ. Porto

Ana Cristina Moreira Freitas
Centro de Matemática & Fac. Economia, Univ. Porto
Abstract
We establish an Edgeworth expansion for an estimator of the adjustment coefficient $R$, directly related to the geometric-type estimator for general exponential tail coefficients proposed in Brito and Freitas (2003). Using the first term of the expansion, we construct more accurate confidence bounds for $R$. The accuracy of the approximation is illustrated using an example from insurance (cf. Schultze and Steinebach (1996)).

Key words and phrases: Adjustment coefficient; Edgeworth expansions; Parameter estimation; Sparre Andersen model; Tail index
JEL Classification: C13, C14
AMS 2000 Classification: 62G05, 62P05
*Corresponding author. Departamento de Matemática Aplicada, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal. Tel: +351-220402202. Fax: +351-220402209. E-mail: [email protected].
1 Introduction
Let $Z_1, Z_2, \ldots$ be independent, nonnegative random variables (r.v.'s) with common distribution function (d.f.) $F$ satisfying

$$1 - F(z) = P(Z_1 > z) = r(z)e^{-Rz}, \quad z > 0, \qquad (1)$$

where $r$ is a regularly varying function at infinity and $R$ is a positive constant. Denoting by $F^{-1}$ the left continuous inverse of $F$, i.e., $F^{-1}(s) := \inf\{x : F(x) \ge s\}$, (1) is equivalent to

$$F^{-1}(1 - s) = -\frac{1}{R}\log s + \log \tilde{L}(s), \quad 0 < s < 1,$$

where $\tilde{L}$ is a slowly varying function at zero (see e.g. Schultze and Steinebach (1996) and the references therein).
The problem of estimating $R$ or related tail indices has received considerable attention, and several estimators for $R$ have been proposed in the literature (see e.g. Dekkers et al. (1989), Bacro and Brito (1995), Csorgo and Viharos (1998) and references therein). Applications arise in a wide variety of domains, for example in economics, insurance, telecommunications, traffic, geology and sociology. We shall consider here the problem of estimating the adjustment coefficient in risk theory.
Based on least squares considerations, Schultze and Steinebach (1996) proposed three estimators, denoted by $\hat{R}_1(k_n)$, $\hat{R}_2(k_n)$ and $\hat{R}_3(k_n)$, for the tail coefficient $R$. Brito and Freitas (2003) have introduced a geometric-type estimator of $R$, $\hat{R}(k_n)$, which is related to the least squares estimators $\hat{R}_1(k_n)$ and $\hat{R}_3(k_n)$ and defined by

$$\hat{R}(k_n) = \sqrt{\frac{\pi_n(k_n)}{\frac{1}{k_n}\sum_{i=1}^{k_n} Z_{n-i+1,n}^2 - \left(\frac{1}{k_n}\sum_{i=1}^{k_n} Z_{n-i+1,n}\right)^2}},$$

where

$$\pi_n(k_n) := \frac{1}{k_n}\sum_{i=1}^{k_n}\log^2(n/i) - \left(\frac{1}{k_n}\sum_{i=1}^{k_n}\log(n/i)\right)^2,$$
$Z_{1,n} \le Z_{2,n} \le \ldots \le Z_{n,n}$ denote the order statistics of the sample $Z_1, Z_2, \ldots, Z_n$, and $k_n$ is a sequence of positive integers satisfying

$$1 \le k_n < n, \quad \lim_{n\to\infty} k_n = \infty \quad \text{and} \quad \lim_{n\to\infty} k_n/n = 0. \qquad (2)$$
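For concreteness, $\hat{R}(k_n)$ and $\pi_n(k_n)$ translate directly into code. The following Python sketch is ours, not from the paper; the function names are hypothetical and the $1/k_n$ normalization of the sample variance follows the display above.

```python
import math

def pi_n(n, k):
    """pi_n(k_n): sample variance (1/k_n normalization) of log(n/i), i = 1..k_n."""
    logs = [math.log(n / i) for i in range(1, k + 1)]
    m = sum(logs) / k
    return sum(x * x for x in logs) / k - m * m

def geometric_type_estimator(sample, k):
    """hat R(k_n): square root of pi_n(k_n) over the sample variance of the
    k_n largest order statistics Z_{n-k_n+1,n}, ..., Z_{n,n}."""
    n = len(sample)
    top = sorted(sample)[-k:]                  # k_n upper order statistics
    m = sum(top) / k
    s2 = sum(z * z for z in top) / k - m * m   # 1/k_n normalization, as in the paper
    return math.sqrt(pi_n(n, k) / s2)
```

For a sample from a distribution in the family (1), the output should be close to $R$ for intermediate choices of $k_n$.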
An important application in risk theory was given by Schultze and Steinebach (1996). In particular, these authors establish the consistency of $\hat{R}_1(k_n)$ and $\hat{R}_3(k_n)$ for estimating the adjustment coefficient for a subclass of the Sparre Andersen model. More recently, Brito and Freitas (2006) have shown that the consistency properties of all these estimators still hold in the Sparre Andersen model under standard conditions.
It is also known that these estimators, when properly normalized, are asymptotically normal, with convergence rate of order $k_n^{-1/2}$. For applications, it is important to derive higher-order approximations. In this paper, our main goal is the construction of more accurate confidence bounds for $R$, using asymptotic expansions. We begin by establishing an Edgeworth expansion for a slightly modified version of $\hat{R}(k_n)$, given by

$$\tilde{R}(k_n) = \left(\frac{1}{k_n}\sum_{i=1}^{k_n} Z_{n-i+1,n}^2 - \left(\frac{1}{k_n}\sum_{i=1}^{k_n} Z_{n-i+1,n}\right)^2\right)^{-1/2}.$$
Based on this result, we construct more accurate interval estimates for R.
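The modified estimator is simply the reciprocal square root of the sample variance of the top $k_n$ order statistics. A sketch (our naming, not from the paper):

```python
import math

def tilde_R(sample, k):
    """tilde R(k_n): inverse square root of the sample variance (with the
    1/k_n normalization) of the k_n largest observations."""
    top = sorted(sample)[-k:]
    m = sum(top) / k
    s2 = sum(z * z for z in top) / k - m * m
    return s2 ** -0.5
```

Because the sample variance is shift-invariant, the same value is obtained from the exceedances over the random level $Z_{n-k_n,n}$, a fact used repeatedly below.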
Another commonly used estimator for tail coefficients is the Hill estimator. Cheng and Pan (1998) established the one-term Edgeworth expansion for this popular estimator and constructed more accurate interval estimates for the tail index. We recall that the least squares type estimators have the interesting property of being universally asymptotically normal under the model (1), a property not shared by the Hill estimator. They are also particularly adequate in situations where the coefficient $R$ is expected to be small (cf. Csorgo and Viharos (1998)). This is the case of the application considered here and constitutes the main motivation for this work.
We also point out that the derived Edgeworth expansion is of independent interest and may be used for other applications.
We present our results in the following section and the corresponding proofs are given in Section 3. The confidence bounds for $R$ are derived in Section 4. In order to illustrate the finite sample behaviour of the confidence intervals for the adjustment coefficient, we consider, in Section 5, a typical example from insurance and present a small-scale simulation study. In this study, we also include the confidence bounds based on the asymptotic normality and on the bootstrap approximation.
2 Main results
2.1 General results
In the sequel, $\stackrel{D}{\longrightarrow}$ and $\stackrel{D}{=}$ stand, respectively, for convergence and equality in distribution. In the same way, $\stackrel{P}{\longrightarrow}$ denotes convergence in probability.
We begin by stating the asymptotic properties of $\tilde{R}(k_n)$, which follow directly from the known corresponding properties of $\hat{R}(k_n)$, using the fact that $\tilde{R}(k_n) = \hat{R}(k_n)/\sqrt{\pi_n(k_n)}$, with

$$\pi_n(k_n) = 1 + O\left(\frac{\log^2 k_n}{k_n}\right),$$

as $n \to \infty$ (cf. Lemma 2 of Brito and Freitas (2003)). In this way, we may conclude that $\tilde{R}(k_n)$ is a consistent estimator of $R$. The general conditions ensuring the asymptotic normality are stated in the theorem below. This result follows directly from Proposition 1 of Brito and Freitas (2003).
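Since $\log(n/i) = \log n - \log i$ and the variance is invariant under shifts, $\pi_n(k_n)$ in fact depends only on $k_n$, and the rate in Lemma 2 is easy to check numerically. A quick illustration (ours):

```python
import math

def pi_n(k):
    """pi_n(k_n): variance (1/k_n normalization) of log(n/i), i = 1..k_n;
    the common log n term cancels, leaving the variance of log i."""
    logs = [math.log(i) for i in range(1, k + 1)]
    m = sum(logs) / k
    return sum(x * x for x in logs) / k - m * m
```

For $k_n = 1000$ the deviation $|\pi_n(k_n) - 1|$ is about $0.03$, comfortably within $\log^2 k_n / k_n \approx 0.048$.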
Theorem 1 Assume that $F$ satisfies (1) and $k_n$ is a sequence of positive integers such that (2) holds. If we suppose that, as $n \to \infty$,

$$k_n^{1/2}\sup_{1/k_n \le y \le 1}\left|\log\frac{\tilde{L}\left(\frac{ytk_n}{n}\right)}{\tilde{L}\left(\frac{k_n}{n}\right)}\right| \to 0$$

uniformly in $t$ on compact subsets of $(0, \infty)$, then

$$\frac{R^2}{\sqrt{8}}\,k_n^{1/2}\left(\frac{1}{\tilde{R}^2(k_n)} - \frac{1}{R^2}\right) \stackrel{D}{\longrightarrow} N(0, 1).$$
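Theorem 1 can be illustrated by simulation in the simplest member of the family (1), the pure exponential tail $1 - F(z) = e^{-Rz}$, for which $\tilde{L}$ is constant and the condition on $\tilde{L}$ holds trivially. The sketch below is ours; the choice of $R$, sample size and $k_n$ is arbitrary.

```python
import math
import random

def normalized_statistic(sample, k, R):
    """R_n = (R^2 / sqrt(8)) k_n^{1/2} (1 / tilde_R^2(k_n) - 1 / R^2)."""
    top = sorted(sample)[-k:]
    m = sum(top) / k
    inv_sq = sum(z * z for z in top) / k - m * m   # = 1 / tilde_R^2(k_n)
    return (R * R / math.sqrt(8.0)) * math.sqrt(k) * (inv_sq - 1.0 / (R * R))

rng = random.Random(1)
R, n, k, reps = 2.0, 4000, 300, 400
stats = [normalized_statistic([rng.expovariate(R) for _ in range(n)], k, R)
         for _ in range(reps)]
mean = sum(stats) / reps
sd = math.sqrt(sum((s - mean) ** 2 for s in stats) / reps)
```

The empirical mean and standard deviation of the replicated statistics should come out close to 0 and 1, in line with the $N(0,1)$ limit.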
Using the results established in Brito and Freitas (2003) and Brito and Freitas (2006), it is also easy to conclude that $\tilde{R}(k_n)$ is universally asymptotically normal under the model (1) and that the tail bootstrap method introduced by Bacro and Brito (1998) works for this estimator.
Consider now the normalized estimator

$$R_n := \frac{R^2}{\sqrt{8}}\,k_n^{1/2}\left(\frac{1}{\tilde{R}^2(k_n)} - \frac{1}{R^2}\right).$$

We will investigate how close the distribution of $R_n$ is to the normal distribution. To this end, we will give the asymptotic expansion for the d.f. of the normalized estimator.
Before presenting this result it is convenient to introduce some notation. We assume that $U_1, U_2, \ldots$ is a sequence of independent uniform $(0,1)$ r.v.'s. The order statistics of the sample $(U_1, U_2, \ldots, U_n)$ are denoted by $U_{1,n} \le U_{2,n} \le \ldots \le U_{n,n}$. We denote by $W_1, W_2, \ldots, W_{k_n}$ the $k_n$ exceedances of the random level $Z_{n-k_n,n}$, that is

$$W_i := Z_{n-k_n+i,n} - Z_{n-k_n,n}, \quad 1 \le i \le k_n.$$
With the above notation,

$$\tilde{R}(k_n) = \left(\frac{1}{k_n}\sum_{i=1}^{k_n} W_i^2 - \left(\frac{1}{k_n}\sum_{i=1}^{k_n} W_i\right)^2\right)^{-1/2}.$$

Since $Z_i \stackrel{D}{=} F^{-1}(U_i)$, $i \ge 1$, we write, without loss of generality,

$$W_i = F^{-1}(U_{n-k_n+i,n}) - F^{-1}(U_{n-k_n,n}).$$
As in Bacro and Brito (1998) (cf. Theorem 1) we shall make use of the following equivalent representation for $W_i$, $i = 1, \ldots, k_n$:

$$W_i = -\frac{1}{R}\log Y_i + \log\frac{\tilde{L}(Y_i(1 - U_{n-k_n,n}))}{\tilde{L}(1 - U_{n-k_n,n})}, \qquad (3)$$

where

$$Y_i = \frac{1 - U_{n-k_n+i,n}}{1 - U_{n-k_n,n}}, \quad \text{for } i = 1, \ldots, k_n. \qquad (4)$$

We recall also that $(Y_i)_{1\le i\le k_n}$ is distributed as the vector of the order statistics of an i.i.d. $k_n$-sample from a uniform $(0,1)$ distribution. Now, we define

$$V(k_n) := k_n^{1/2}\sup_{Y_{k_n}\le y\le 1}\left|\log\frac{\tilde{L}(y(1 - U_{n-k_n,n}))}{\tilde{L}\left(\frac{k_n}{n}\right)}\right|.$$
Moreover, we write $\Phi$ for the standard normal d.f. and $\phi$ for the corresponding density.

We present now the Edgeworth expansion for the d.f. of $R_n$ as a smooth function of a sample variance of i.i.d. unit exponential r.v.'s.
Theorem 2 Assume that $F$ satisfies condition (1) and $k_n$ is a sequence of positive integers satisfying (2). Let $j$ be a positive integer and suppose that, for some $\epsilon > 0$,

$$P\left[V(k_n) > ak_n^{-j/2-\epsilon}\right] = o\left(k_n^{-j/2}\right), \quad \forall a > 0. \qquad (5)$$

Then,

$$P[R_n \le x] = \Phi(x) + k_n^{-1/2}p_1(x)\phi(x) + \ldots + k_n^{-j/2}p_j(x)\phi(x) + o\left(k_n^{-j/2}\right), \qquad (6)$$

uniformly in $x$, where the function $p_j$ is a polynomial of degree at most $3j-1$, odd for even $j$ and even for odd $j$, with coefficients depending on moments of the Exp(1) distribution up to order $2(j + 2)$.
In order to give an example, we will derive the expression of $p_1$ (to compute higher order polynomials see for example Hall (1992), Section 2.4). We use some calculations obtained by Hall (1992), Section 2.6, and the fact, proved in the next section, that, under condition (5), the first $j$ terms of the Edgeworth expansion of $R_n$ are exactly the first $j$ terms of the Edgeworth expansion of a smooth function based on the variance of a $k_n$-sample of i.i.d. r.v.'s with distribution Exp(1). Let $T$ be a r.v. with distribution Exp(1). Then,

$$p_1(x) = (\tau^\dagger)^{-1} + (\tau^\dagger)^{-3}\left(\rho^2 - \gamma^\dagger/6\right)(x^2 - 1), \qquad (7)$$

where

$$(\tau^\dagger)^2 = E(T - E(T))^4\left(E(T - E(T))^2\right)^{-2} - 1 = 8,$$

$$\rho = E(T - E(T))^3\left(E(T - E(T))^2\right)^{-3/2} = 2,$$

$$\gamma^\dagger = E\left[(T - E(T))^2\left(E(T - E(T))^2\right)^{-1} - 1\right]^3 = 240.$$

Simplifying (7), we then obtain

$$p_1(x) = \frac{\sqrt{2}}{8}\left(11 - 9x^2\right). \qquad (8)$$

So, if (5) holds for $j = 1$, then we obtain

$$P[R_n \le x] = \Phi(x) + k_n^{-1/2}\frac{\sqrt{2}}{8}\left(11 - 9x^2\right)\phi(x) + o\left(k_n^{-1/2}\right).$$
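The three constants above involve only central moments of the Exp(1) distribution, whose $n$-th raw moment is $n!$, so they can be verified exactly. Our check (using the expansion $E[(T-1)^2 - 1]^3 = \mu_6 - 3\mu_4 + 3\mu_2 - 1$, valid since $E(T) = 1$):

```python
import math
from fractions import Fraction

def central_moment_exp1(n):
    """mu_n = E(T - 1)^n for T ~ Exp(1), via E(T^k) = k! and the binomial theorem."""
    return sum(Fraction(math.comb(n, j) * math.factorial(j) * (-1) ** (n - j))
               for j in range(n + 1))

mu2, mu3, mu4, mu6 = (central_moment_exp1(n) for n in (2, 3, 4, 6))
tau_dagger_sq = mu4 / mu2 ** 2 - 1          # (tau†)^2
rho = mu3                                   # mu2 = 1, so rho = mu3 / mu2^{3/2}
gamma_dagger = mu6 - 3 * mu4 + 3 * mu2 - 1  # E[(T-1)^2 - 1]^3

def p1(x):
    """One-term Edgeworth polynomial (7)."""
    tau = math.sqrt(float(tau_dagger_sq))
    return 1 / tau + (rho ** 2 - gamma_dagger / 6) / tau ** 3 * (x * x - 1)
```

Evaluating gives $(\tau^\dagger)^2 = 8$, $\rho = 2$, $\gamma^\dagger = 240$, and `p1` agrees with the simplified form (8).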
2.2 Application
Some further notation will be needed before we apply the above result to theestimation of the adjustment coe�cient. We begin by considering the SparreAndersen model, where the claims C1, C2, . . . occur at times T1, T1 + T2, . . .,such that {Ci} and {Ti} are independent sequences of i.i.d. r.v.’s with finitemeans. Denoting by C(t) the total sum of claims up to time t, we have
C(t) =N(t)X
i=1
Ci,
where N(t) is the number of claims observed up to time t. Starting withan initial capital x and compensating the claim process {C(t)} by incomingpremiums with constant rate � > 0 per unit time, the risk reserve of thecompany is defined by
S(t) = x+ �t� C(t).
The probability of ruin is then given by
U(x) = P⇣inft>0
S(t) < 0⌘
= P
supn�1
nX
i=1
(Ci � �Ti) > x
!.
Define now i.i.d. r.v.'s by $D_i := C_i - \lambda T_i$ for $i = 1, 2, \ldots$ and assume that $E(D_1) < 0$. Denote by $S_n$ the associated random walk, with $S_0 = 0$ and $S_n = D_1 + \ldots + D_n$, $n = 1, 2, \ldots$.

The distribution of the r.v. $D_1$ considered here is supposed to have a sufficiently regular tail, such that the following conditions hold:

(H1) There exists $R > 0$ such that $E(e^{RD_1}) = 1$.

(H2) $E(|D_1|e^{RD_1}) < \infty$.

The unique positive solution $R$ is called the adjustment coefficient. It is well-known that, under (H1), the Lundberg inequality gives

$$U(x) \le e^{-Rx}, \quad \text{for all } x > 0,$$

and that, under both conditions (H1) and (H2), the Cramér-Lundberg approximation holds,

$$U(x) \sim ae^{-Rx}, \quad \text{as } x \to \infty,$$

where $a$ is a positive constant (see e.g. Rolski et al. (1999)). Different approaches have been used for estimating the adjustment coefficient $R$ (see e.g. Pitts et al. (1996) and references therein). Here, we will follow the approach suggested by Csorgo and Steinebach (1991) of estimating $R$ by means of a sequence of auxiliary r.v.'s $\{Z_k\}$, recursively defined as follows:

$$M_0 = 0, \quad M_n = \max\{M_{n-1} + D_n, 0\} \quad \text{for } n = 1, 2, \ldots,$$
$$\nu_0 = 0, \quad \nu_k = \min\{n \ge \nu_{k-1} + 1 : M_n = 0\} \quad \text{for } k = 1, 2, \ldots,$$
$$Z_k = \max_{\nu_{k-1} < j \le \nu_k} M_j \quad \text{for } k = 1, 2, \ldots.$$
$Z_1, Z_2, \ldots$ defines a sequence of i.i.d. r.v.'s. In the context of queueing models, the r.v. $Z_k$ may be interpreted as the maximum waiting time in the $k$-th busy cycle of a GI/G/1 queueing system. The distribution of $Z_1$ was computed by Cohen (1969) in the cases where

(H3) $\{C(t)\}$ is a compound Poisson process or the claims $C_i$ are exponentially distributed.

As a consequence, Csorgo and Steinebach (1991) observed that, in both cases,

$$P(Z_1 > z) = ce^{-Rz}\left(1 + O(e^{-Az})\right) \quad \text{as } z \to \infty, \qquad (9)$$

with positive constants $c$ and $A$. Hence, under one of the assumptions stated in (H3), condition (1) holds. In this way, the estimators of the exponential tail coefficient in the family (1) may be applied to the estimation of the adjustment coefficient under (H3).
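The recursive construction of $\{Z_k\}$ is straightforward to simulate. In the sketch below (ours, not from the paper), the claim process is compound Poisson with exponentially distributed claims, so (H3) holds; the claim mean, arrival rate and premium rate are illustrative values chosen so that $E(D_1) < 0$.

```python
import random

def simulate_Z(num_cycles, claim_mean, arrival_rate, premium_rate, seed=0):
    """Generate Z_1, ..., Z_m via the ladder recursion M_n = max(M_{n-1} + D_n, 0):
    a busy cycle ends when the reflected walk M returns to 0, and Z_k is the
    maximum of M over the k-th cycle."""
    rng = random.Random(seed)
    Z, M, cycle_max = [], 0.0, 0.0
    while len(Z) < num_cycles:
        claim = rng.expovariate(1.0 / claim_mean)     # C_i, exponential claim
        interarrival = rng.expovariate(arrival_rate)  # T_i, Poisson arrivals
        M = max(M + claim - premium_rate * interarrival, 0.0)
        cycle_max = max(cycle_max, M)
        if M == 0.0:                                  # cycle closed at a zero of M
            Z.append(cycle_max)
            cycle_max = 0.0
    return Z
```

With claim mean 1, arrival rate 1 and premium rate 2, for instance, $E(D_1) = -1 < 0$ and the adjustment coefficient of this toy model is $R = 1/2$.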
More recently, Brito and Freitas (2006) have shown that the consistency properties of the estimators still hold in the Sparre Andersen model under (H1) and (H2), since under these conditions

$$P(Z_1 > z) = ce^{-Rz}(1 + o(1)) \quad \text{as } z \to \infty.$$

On the other hand, it is clear that we need to assume stronger conditions than (H1) and (H2) in order to apply the result of Theorem 2 in this context. In the following corollary, we establish the Edgeworth expansion for the family (9), for an adequate choice of the sequence $k_n$.
Corollary 1 Assume that (9) holds and $k_n$ is a sequence of positive integers satisfying (2). Let $j$ be a positive integer and fix a small $\epsilon > 0$. For sequences $k_n \to \infty$ such that $k_n = o\left(n^{4A/(2(A+R)+(2R+A)j+4R\epsilon)}\right)$, we have

$$P[R_n \le x] = \Phi(x) + k_n^{-1/2}p_1(x)\phi(x) + \ldots + k_n^{-j/2}p_j(x)\phi(x) + o\left(k_n^{-j/2}\right), \qquad (10)$$

uniformly in $x$, where the function $p_j$ is a polynomial of degree at most $3j-1$, odd for even $j$ and even for odd $j$, with coefficients depending on moments of the Exp(1) distribution up to order $2(j + 2)$.
In the case where $\{C(t)\}$ is a compound Poisson claim process with exponentially distributed claims, the exact d.f. of the r.v.'s $Z_1, Z_2, \ldots$ is given by

$$F(z) = \frac{1 - ae^{-(1-a)z/\beta}}{1 - a^2e^{-(1-a)z/\beta}}, \quad z > 0, \qquad (11)$$

where $a := \beta/\alpha$ and $\alpha := E(\lambda T_1) > E(C_1) =: \beta$ (cf. Cohen (1969)). Moreover,

$$R = \frac{\alpha - \beta}{\alpha\beta}.$$

In this case, $1 - F(z) = a(1-a)e^{-Rz}\{1 + O(e^{-Rz})\}$ as $z \to \infty$, and so the Edgeworth expansion (10) is valid for sequences $k_n \to \infty$ such that $k_n = o\left(n^{\frac{4}{4+3j+4\epsilon}}\right)$. In particular, the one-term Edgeworth expansion ($j = 1$) holds for intermediate sequences $k_n \to \infty$ satisfying $k_n = o\left(n^{\frac{4}{7+4\epsilon}}\right)$.
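The d.f. (11) has a closed-form inverse, which yields an exact sampler for $Z_1$ in this case (useful for simulation studies like the one in Section 5). Solving $F(z) = u$ for $z$, and noting that $F(0^+) = 1/(1+a)$, so that $Z_1$ carries an atom of that mass at zero, we obtain the following sketch (our code; the inversion is elementary algebra, not taken from the paper):

```python
import math
import random

def sample_Z(num, alpha, beta, seed=0):
    """Exact sampler from (11): F(z) = (1 - a e^{-(1-a)z/beta}) /
    (1 - a^2 e^{-(1-a)z/beta}), z > 0, with a = beta/alpha; F places
    mass 1/(1+a) at z = 0."""
    a = beta / alpha
    rng = random.Random(seed)
    out = []
    for _ in range(num):
        u = rng.random()
        if u <= 1.0 / (1.0 + a):
            out.append(0.0)                    # the atom at zero
        else:
            # invert F: e^{-(1-a)z/beta} = (1-u) / (a (1 - a u))
            out.append(-beta / (1.0 - a) * math.log((1.0 - u) / (a * (1.0 - a * u))))
    return out
```

With $(\alpha, \beta) = (24000, 10000)$, as in Section 5, this gives $a = 5/12$ and $R = 14000/(2.4\times 10^8) \approx 5.83\times 10^{-5}$.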
3 Proofs
Proof of Theorem 2. In order to simplify the presentation of the proof of Theorem 2 we will introduce some notation. Let $T_1, T_2, \ldots$ and $T_1', T_2', \ldots$ be two sequences of r.v.'s, $\mathbf{T} = (T_1, \ldots, T_n)$ and $\mathbf{T}' = (T_1', \ldots, T_n')$. We denote by $S_n^2(\mathbf{T})$ the sample variance of $\mathbf{T}$ and by $S_n(\mathbf{T}, \mathbf{T}')$ the sample covariance between $\mathbf{T}$ and $\mathbf{T}'$, that is,

$$S_n^2(\mathbf{T}) = \frac{1}{n}\sum_{i=1}^{n}\left(T_i - \frac{1}{n}\sum_{i=1}^{n}T_i\right)^2$$

and

$$S_n(\mathbf{T}, \mathbf{T}') = \frac{1}{n}\sum_{i=1}^{n}\left(T_i - \frac{1}{n}\sum_{i=1}^{n}T_i\right)\left(T_i' - \frac{1}{n}\sum_{i=1}^{n}T_i'\right).$$
We define also, for $i = 1, \ldots, k_n$,

$$E_i := -\frac{1}{R}\log Y_i, \qquad F_i := \log\frac{\tilde{L}(Y_i(1 - U_{n-k_n,n}))}{\tilde{L}(1 - U_{n-k_n,n})},$$

where $Y_i$ is defined as in (4), and write $\mathbf{E} = (E_1, \ldots, E_{k_n})$ and $\mathbf{F} = (F_1, \ldots, F_{k_n})$. With these notations and using the representation (3) in the definition of the estimator $\tilde{R}(k_n)$, we may write

$$\frac{1}{\tilde{R}^2(k_n)} - \frac{1}{R^2} = S_{k_n}^2(\mathbf{E}) + S_{k_n}^2(\mathbf{F}) + 2S_{k_n}(\mathbf{E}, \mathbf{F}) - \frac{1}{R^2}$$

and thus

$$R_n = \frac{k_n^{1/2}}{\sqrt{8}}\left(R^2 S_{k_n}^2(\mathbf{E}) - 1\right) + \frac{R^2}{\sqrt{8}}k_n^{1/2}\left(S_{k_n}^2(\mathbf{F}) + 2S_{k_n}(\mathbf{E}, \mathbf{F})\right).$$

Note that $R^2 S_{k_n}^2(\mathbf{E}) = S_{k_n}^2(R\mathbf{E})$, which is the sample variance of a unit exponential i.i.d. $k_n$-sample.

By Theorem 2.2 of Hall (1992) we have that (see also Hall (1992), Section 2.6), for $j \ge 1$,

$$P\left[k_n^{1/2}\left(R^2 S_{k_n}^2(\mathbf{E}) - 1\right)/\sqrt{8} \le x\right] = \Phi(x) + k_n^{-1/2}p_1(x)\phi(x) + \ldots + k_n^{-j/2}p_j(x)\phi(x) + o\left(k_n^{-j/2}\right), \qquad (12)$$

uniformly in $x$, where the functions $p_j$ are polynomials of degree at most $3j-1$, odd for even $j$ and even for odd $j$, with coefficients depending on moments of the Exp(1) distribution up to order $2(j + 2)$.
Now, we are going to study the term $\frac{R^2}{\sqrt{8}}k_n^{1/2}\left(S_{k_n}^2(\mathbf{F}) + 2S_{k_n}(\mathbf{E}, \mathbf{F})\right)$. The well-known inequality

$$\left(S_{k_n}(\mathbf{E}, \mathbf{F})\right)^2 \le S_{k_n}^2(\mathbf{E})\,S_{k_n}^2(\mathbf{F})$$

implies that

$$|S_{k_n}(\mathbf{E}, \mathbf{F})| \le \left(1/R + o_P(1)\right)S_{k_n}(\mathbf{F}).$$

By a routine calculation, it is easily verified that

$$S_{k_n}^2(\mathbf{F}) \le \sup_{Y_{k_n}\le y\le 1}\left(\log\frac{\tilde{L}(y(1 - U_{n-k_n,n}))}{\tilde{L}(k_n/n)}\right)^2.$$
Thus, if we suppose that (5) holds for some $\epsilon > 0$, then we have that

$$P\left[\frac{R^2}{\sqrt{8}}k_n^{1/2}S_{k_n}^2(\mathbf{F}) > k_n^{-j/2-\epsilon}\right] = o\left(k_n^{-j/2}\right) \qquad (13)$$

and

$$P\left[\frac{R^2}{\sqrt{8}}k_n^{1/2}\,2S_{k_n}(\mathbf{E}, \mathbf{F}) > k_n^{-j/2-\epsilon}\right] = o\left(k_n^{-j/2}\right). \qquad (14)$$
For simplifying the presentation we write

$$R_n = P_n + \Delta_1 + \Delta_2,$$

where

$$P_n = \frac{k_n^{1/2}}{\sqrt{8}}\left(R^2 S_{k_n}^2(\mathbf{E}) - 1\right), \qquad \Delta_1 = \frac{R^2}{\sqrt{8}}k_n^{1/2}S_{k_n}^2(\mathbf{F}), \qquad \Delta_2 = \frac{R^2}{\sqrt{8}}k_n^{1/2}\,2S_{k_n}(\mathbf{E}, \mathbf{F}).$$
Now, using the delta method and relations (13) and (14), we obtain

$$\begin{aligned}
P[R_n \le x] &= P[P_n + \Delta_1 + \Delta_2 \le x]\\
&\le P\left[P_n + \Delta_1 \le x + k_n^{-j/2-\epsilon}\right] + P\left[|\Delta_2| > k_n^{-j/2-\epsilon}\right]\\
&\le P\left[P_n \le x + 2k_n^{-j/2-\epsilon}\right] + P\left[|\Delta_1| > k_n^{-j/2-\epsilon}\right] + o\left(k_n^{-j/2}\right)\\
&= P[P_n \le x] + o\left(k_n^{-j/2}\right).
\end{aligned}$$

Similarly,

$$P[R_n \le x] \ge P[P_n \le x] + o\left(k_n^{-j/2}\right).$$

Thus, the result follows from (12). $\Box$
Proof of Corollary 1. For the family (9), we have that

$$\tilde{L}(s) = c^{1/R}\{1 + O(s^{A/R})\}, \quad 0 < s < 1.$$

Let $0 < \delta < 1$ be such that $|s| < \delta \Rightarrow \left|\frac{O(s^{A/R})}{s^{A/R}}\right| < k$, for some $k \in \mathbb{R}^+$. Consider the event $A_n(\delta) = \{1 - U_{n-k_n,n} < \delta\}$. On $A_n(\delta)$, we have, for $n$ sufficiently large and for $0 < y \le 1$,

$$\frac{\tilde{L}(y(1 - U_{n-k_n,n}))}{\tilde{L}\left(\frac{k_n}{n}\right)} \le 1 + b_1\left(y(1 - U_{n-k_n,n})\right)^{A/R} + b_2\left(\frac{k_n}{n}\right)^{A/R}, \quad b_1, b_2 \in \mathbb{R}^+.$$

So,

$$\left|\log\frac{\tilde{L}(y(1 - U_{n-k_n,n}))}{\tilde{L}\left(\frac{k_n}{n}\right)}\right| \le b_1\left(y(1 - U_{n-k_n,n})\right)^{A/R} + b_2\left(\frac{k_n}{n}\right)^{A/R},$$

and consequently,

$$\sup_{Y_{k_n}\le y\le 1}\left|\log\frac{\tilde{L}(y(1 - U_{n-k_n,n}))}{\tilde{L}\left(\frac{k_n}{n}\right)}\right| \le b_1\left(1 - U_{n-k_n,n}\right)^{A/R} + b_2\left(\frac{k_n}{n}\right)^{A/R}.$$
Now, fix a small $\epsilon > 0$. On $A_n(\delta)$, we then have, for $a > 0$ and $n$ sufficiently large,

$$\begin{aligned}
P\left[V(k_n) > ak_n^{-j/2-\epsilon}\right] &= P\left[\sup_{Y_{k_n}\le y\le 1}\left|\log\frac{\tilde{L}(y(1 - U_{n-k_n,n}))}{\tilde{L}\left(\frac{k_n}{n}\right)}\right| > ak_n^{-(j+1)/2-\epsilon}\right]\\
&\le P\left[b_1\left(1 - U_{n-k_n,n}\right)^{A/R} + b_2\left(\frac{k_n}{n}\right)^{A/R} > ak_n^{-(j+1)/2-\epsilon}\right]\\
&= P\left[\left(1 - U_{n-k_n,n}\right)^{A/R} > \frac{a}{b_1}k_n^{-(j+1)/2-\epsilon} - \frac{b_2}{b_1}\left(\frac{k_n}{n}\right)^{A/R}\right]\\
&= P\left[U_{k_n+1,n} > \left(\frac{a}{b_1}k_n^{-(j+1)/2-\epsilon} - \frac{b_2}{b_1}\left(\frac{k_n}{n}\right)^{A/R}\right)^{R/A}\right]\\
&\le P\left[\left|U_{k_n+1,n} - \frac{k_n+1}{n+1}\right| > \left(\frac{a}{b_1}k_n^{-(j+1)/2-\epsilon} - \frac{b_2}{b_1}\left(\frac{k_n}{n}\right)^{A/R}\right)^{R/A} - \frac{k_n+1}{n+1}\right]\\
&\le \frac{E\left(U_{k_n+1,n} - \frac{k_n+1}{n+1}\right)^2}{\left(\left(\frac{a}{b_1}k_n^{-(j+1)/2-\epsilon} - \frac{b_2}{b_1}\left(\frac{k_n}{n}\right)^{A/R}\right)^{R/A} - \frac{k_n}{n}\right)^2}.
\end{aligned}$$
Recalling that $E(U_{k_n,n}) = \frac{k_n}{n+1}$ and $V(U_{k_n,n}) = O\left(\frac{k_n}{n^2}\right)$, we conclude that

$$P\left[V(k_n) > ak_n^{-j/2-\epsilon}\right] = \frac{O\left(\frac{k_n}{n^2}\right)}{O\left(\left(k_n^{-(j+1)/2-\epsilon}\right)^{\frac{2R}{A}}\right)} = O\left(\frac{k_n^{1+\frac{2R}{A}\left((j+1)/2+\epsilon\right)}}{n^2}\right).$$
Note now that $1 - P[A_n(\delta)] = O\left(\frac{k_n}{n^2}\right)$ since, for $n$ sufficiently large,

$$1 - P[A_n(\delta)] = P\left[U_{k_n+1,n} - \frac{k_n+1}{n+1} \ge \delta - \frac{k_n+1}{n+1}\right] \le \frac{E\left(U_{k_n+1,n} - E(U_{k_n+1,n})\right)^2}{\left(\delta - \frac{k_n}{n}\right)^2}.$$

So,

$$P\left[V(k_n) > ak_n^{-j/2-\epsilon}\right] = O\left(\frac{k_n^{1+\frac{2R}{A}\left((j+1)/2+\epsilon\right)}}{n^2}\right).$$
Thus, if $k_n \to \infty$ is such that $\frac{k_n^{1+\frac{2R}{A}\left((j+1)/2+\epsilon\right)}}{n^2} = o\left(k_n^{-j/2}\right)$, which is equivalent to $k_n = o\left(n^{\frac{4A}{2(A+R)+(2R+A)j+4R\epsilon}}\right)$, the condition (5) of Theorem 2 holds and the result follows. $\Box$
4 Confidence bounds
Suppose that the d.f. $F$ satisfies (1) and (2) holds. If we knew the d.f. of $R_n$, we could calculate the corresponding $q$-quantile $x_q$, i.e. $P(R_n \le x_q) = q$. The lower $100p\%$ confidence intervals for $R$ would be:

$$\begin{aligned}
CI_{\tilde{R}}(k_n, p) &= \{R : R_n > x_q\}\\
&= \left(\tilde{R}(k_n)\left(1 + \sqrt{8}\,x_q k_n^{-1/2}\right)^{1/2},\ +\infty\right)\\
&= \left(\tilde{R}(k_n)\left(1 + \sqrt{2}\,x_q k_n^{-1/2} + o\left(k_n^{-1/2}\right)\right),\ +\infty\right),
\end{aligned}$$

where $p = 1 - q$.

For estimating $x_q$ we may use, as usual, the asymptotic normality of $R_n$. $P(R_n \le x_q) = q$ may be written as $\Phi(x_q) + o(1) = q$, and so $x_q = \Phi^{-1}(q) + o(1)$, where $\Phi^{-1}(q)$ denotes the $q$-quantile of the $N(0,1)$ distribution. We then may construct the asymptotic confidence bounds

$$NI_{\tilde{R}}(k_n, p) = \left(\tilde{R}(k_n)\left(1 + \sqrt{2}\,\Phi^{-1}(q)k_n^{-1/2} + o\left(k_n^{-1/2}\right)\right),\ +\infty\right).$$
Next, we are going to use the asymptotic expansion obtained in Theorem 2 to get more accurate interval estimates for $R$. For this we suppose that condition (5) holds for $j = 1$ and we consider the polynomial derived in (8): $p_1(x) = \frac{\sqrt{2}}{8}(11 - 9x^2)$. (The following procedure can be extended to higher orders.) In the case $j = 1$, $P(R_n \le x_q) = q$ may be written as

$$\Phi(x_q) + k_n^{-1/2}p_1(x_q)\phi(x_q) + o\left(k_n^{-1/2}\right) = q.$$

Solving this equation for $x_q$, we have

$$x_q = \Phi^{-1}(q) - p_1\left(\Phi^{-1}(q)\right)k_n^{-1/2} + o\left(k_n^{-1/2}\right)$$

(see e.g. Hall (1992), Section 2.5). Then, we obtain the following confidence bounds

$$EI_{\tilde{R}}(k_n, p) = \left(\tilde{R}(k_n)\left(1 + \sqrt{2}\,\Phi^{-1}(q)k_n^{-1/2} - \sqrt{2}\,p_1\left(\Phi^{-1}(q)\right)k_n^{-1} + o\left(k_n^{-1}\right)\right),\ +\infty\right).$$
The performance of these bounds will be investigated in Section 5. We will include in our study the so-called percentile confidence bounds given by:

$$BI_{\tilde{R}}(k_n, p) = \left(\tilde{R}(k_n)\left(1 + \sqrt{2}\,\hat{x}_q k_n^{-1/2} + o\left(k_n^{-1/2}\right)\right),\ +\infty\right),$$

where $\hat{x}_q$ is the $q$-quantile of the tail bootstrap d.f. of $R_n$ (see e.g. Brito and Freitas (2006)).
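Dropping the $o(\cdot)$ terms, the lower endpoints of $NI_{\tilde{R}}(k_n, p)$ and $EI_{\tilde{R}}(k_n, p)$ are simple functions of $\tilde{R}(k_n)$. A sketch of the computation (our naming; `NormalDist().inv_cdf` supplies $\Phi^{-1}$):

```python
import math
from statistics import NormalDist

def p1(x):
    """One-term Edgeworth polynomial (8)."""
    return math.sqrt(2.0) / 8.0 * (11.0 - 9.0 * x * x)

def lower_bounds(sample, k, p):
    """Lower endpoints of the one-sided intervals NI (normal approximation)
    and EI (one-term Edgeworth correction), with o(.) terms dropped."""
    q = 1.0 - p
    top = sorted(sample)[-k:]
    m = sum(top) / k
    s2 = sum(z * z for z in top) / k - m * m
    tilde_R = s2 ** -0.5
    xq = NormalDist().inv_cdf(q)               # Phi^{-1}(q)
    ni = tilde_R * (1.0 + math.sqrt(2.0) * xq / math.sqrt(k))
    ei = tilde_R * (1.0 + math.sqrt(2.0) * xq / math.sqrt(k)
                    - math.sqrt(2.0) * p1(xq) / k)
    return ni, ei
```

For $p = 0.95$ we have $\Phi^{-1}(q) \approx -1.645$ and $p_1(\Phi^{-1}(q)) < 0$, so the Edgeworth correction raises the lower endpoint slightly, making EI less conservative than NI, in line with the simulation results of Section 5.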
5 Simulation results
In this section we present a small simulation study, in order to examine the finite sample behaviour of the confidence intervals $EI_{\tilde{R}}(k_n, p)$ constructed in Section 4.
For the sake of comparison with previous simulation studies, we consider the family defined by (11) with $(\alpha, \beta) = (24000, 10000)$.
We also include in the study the confidence bounds based on the asymptotic normality of the geometric-type estimator $\tilde{R}(k_n)$, $NI_{\tilde{R}}(k_n, p)$, and on the tail bootstrap approximation, $BI_{\tilde{R}}(k_n, p)$.
We take here $n = 500$ and $n = 1000$. For a very limited illustration of the influence of the choice of $k_n$ on the coverage accuracy, we present three values of $k_n$ for each $n$. The empirical coverage rates for 95% and 99% confidence bounds are reported in Tables 1 and 2 ($n = 500$) and in Tables 3 and 4 ($n = 1000$). The confidence bounds were constructed from 1000 samples for each $n$. Each bootstrap bound was computed using 2500 bootstrap samples.
We start by noting that all the bootstrap confidence intervals show undercoverage.

We may observe that the confidence bounds based on the first order Edgeworth expansion are always more accurate than the ones based on the standard normal distribution, which are too conservative.

We may conclude that, in general, for both values of $n$ and $p$, the coverage rates present a better behaviour using the first order Edgeworth expansion.
$k_n$   $NI_{\tilde{R}}(k_n,p)$   $EI_{\tilde{R}}(k_n,p)$   $BI_{\tilde{R}}(k_n,p)$
30      0.993                     0.967                     0.945
50      0.976                     0.955                     0.940
70      0.973                     0.948                     0.938

Table 1: Empirical coverage rates of the confidence bounds, $n = 500$, $p = 0.95$.

$k_n$   $NI_{\tilde{R}}(k_n,p)$   $EI_{\tilde{R}}(k_n,p)$   $BI_{\tilde{R}}(k_n,p)$
30      1.000                     0.994                     0.983
50      1.000                     0.992                     0.981
70      0.999                     0.988                     0.980

Table 2: Empirical coverage rates of the confidence bounds, $n = 500$, $p = 0.99$.

$k_n$   $NI_{\tilde{R}}(k_n,p)$   $EI_{\tilde{R}}(k_n,p)$   $BI_{\tilde{R}}(k_n,p)$
50      0.977                     0.958                     0.945
80      0.972                     0.951                     0.942
110     0.964                     0.949                     0.940

Table 3: Empirical coverage rates of the confidence bounds, $n = 1000$, $p = 0.95$.

$k_n$   $NI_{\tilde{R}}(k_n,p)$   $EI_{\tilde{R}}(k_n,p)$   $BI_{\tilde{R}}(k_n,p)$
50      0.999                     0.994                     0.984
80      0.998                     0.990                     0.984
110     0.997                     0.988                     0.982

Table 4: Empirical coverage rates of the confidence bounds, $n = 1000$, $p = 0.99$.
Acknowledgements
Both authors acknowledge support from CMUP, financed by FCT (Portugal) through the programs POCTI and POSI, with national and European Community structural funds.
References
Bacro, J.N. and Brito, M. (1995). Weak limiting behaviour of a simple tail Pareto-index estimator. Journal of Statistical Planning and Inference 45, 7-19.

Bacro, J.N. and Brito, M. (1998). A tail bootstrap procedure for estimating the tail Pareto index. Journal of Statistical Planning and Inference 71, 245-260.

Brito, M. and Freitas, A.C.M. (2003). Limiting behaviour of a geometric-type estimator for tail indices. Insurance: Mathematics and Economics 33, 211-226.

Brito, M. and Freitas, A.C.M. (2006). Weak convergence of a bootstrap geometric-type estimator with applications to risk theory. Insurance: Mathematics and Economics 38, 571-584.

Cheng, S. and Pan, J. (1998). Asymptotic expansions of estimators for the tail index with applications. Scand. J. Statist. 25, 717-728.

Cohen, J.W. (1969). The single server queue. North-Holland, Amsterdam.

Csorgo, M. and Steinebach, J. (1991). On the estimation of the adjustment coefficient in risk theory via intermediate order statistics. Insurance: Mathematics and Economics 10, 37-50.

Csorgo, S. and Viharos, L. (1998). Estimating the tail index. In: Asymptotic Methods in Probability and Statistics (B. Szyszkowicz, ed.), North-Holland, Amsterdam, 833-881.

Dekkers, A.L.M., Einmahl, J.H.J. and de Haan, L. (1989). A moment estimator for the index of an extreme-value distribution. Ann. Statist. 17, 1833-1855.

Haeusler, E. and Segers, J. (2007). Assessing confidence intervals for the tail index by Edgeworth expansions for the Hill estimator. Bernoulli 13, 175-194.

Hall, P. (1992). The bootstrap and Edgeworth expansion. Springer-Verlag, New York.

Pitts, S.M., Grubel, R. and Embrechts, P. (1996). Confidence bounds for the adjustment coefficient. Advances in Applied Probability 28, 802-827.

Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999). Stochastic processes for insurance and finance. Wiley, New York.

Schultze, J. and Steinebach, J. (1996). On least squares estimates of an exponential tail coefficient. Statistics & Decisions 14, 353-372.