
Random sums of random variables and vectors

Version May 6, 2009

E. Omey (*) and R. Vesilo (**)

    (*) HUB, Stormstraat 2, 1000 Brussels - [email protected]

    (**) Macquarie University, Sydney - [email protected]

    Abstract

Let $\{X, X_i, i = 1, 2, \ldots\}$ denote independent positive random variables having a common distribution function $F(x)$ and, independent of $X$, let $N$ denote an integer-valued random variable. Using $S(0) = 0$ and $S(n) = S(n-1) + X_n$, the random sum $S(N)$ has distribution function
$$G(x) = \sum_{i=0}^{\infty} P(N = i)P(S(i) \le x)$$
and tail distribution $\overline{G}(x) = 1 - G(x)$. The distribution function $G$ is called subordinated to $F$ with subordinator $N$. Under suitable conditions, it can be proved that $\overline{G}(x) \sim E(N)\overline{F}(x)$ as $x \to \infty$, and there are many results of this type. In this paper we extend some of the existing results. In the place of i.i.d. random variables, we use variables that are independent, or variables that are dependent but asymptotically independent. We also consider multivariate subordinated distribution functions.

Keywords: subexponential distributions, regular variation, O-regular variation, subordination

AMS 2000 Subject Classification: Primary 60G50; Secondary 60F10, 60E15, 26A12, 60K99

    1 Introduction

Let $\{X, X_i, i = 1, 2, \ldots\}$ denote independent, nonnegative random variables (r.v.s) having a common distribution function (d.f.) $F(x)$. Independent of $X$, let $N$ denote an integer-valued r.v. with probability mass function $p_n = P(N = n)$. Partial sums are given by $S(0) = 0$ and $S(n) = X_1 + X_2 + \cdots + X_n$, $n \ge 1$. For $n \ge 1$, the d.f. of $S(n)$ is given by $P(S(n) \le x) = F^{*n}(x)$, where $F^{*n}(x)$ denotes the $n$-fold convolution of $F$ with itself. Replacing the index $n$ by the random index $N$, we obtain the random sum $S(N) = \sum_{i=1}^{N} X_i$. The d.f. of $S(N)$ is given by
$$G(x) = \sum_{n=0}^{\infty} p_n F^{*n}(x).$$


The tail distribution is given by
$$\overline{G}(x) = 1 - G(x) = \sum_{n=1}^{\infty} p_n \overline{F^{*n}}(x).$$

If $p_0 = 0$ and $F$ has a density $f$, then $G$ also has a density $g$ given by
$$g(x) = \sum_{n=1}^{\infty} p_n f^{*n}(x),$$
where $f^{*n}(x) = f * f * \cdots * f(x)$ and where $a * b(x) = \int_0^x a(x - y)b(y)\,dy$. Many papers have been devoted to the asymptotic behaviour of the tail $\overline{G}(x)$ and of the density $g(x)$.
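As a quick numerical illustration of this series representation (the sketch below is ours: the Pareto density, the Poisson subordinator and all grid constants are assumptions, not taken from the paper), one can truncate $N$, build the convolution powers $f^{*n}$ on a grid, and compare the resulting tail with $E(N)\overline{F}(x)$:

import numpy as np
from scipy.signal import fftconvolve
from math import exp, factorial

# Assumed ingredients: Pareto(alpha) summands on [1, oo), N ~ Poisson(lam)
alpha, lam = 1.5, 2.0
h, m, n_max = 0.01, 60000, 40          # grid step, grid size, truncation of N
x = h * np.arange(m)
f = np.where(x >= 1.0, alpha * np.maximum(x, 1.0) ** (-alpha - 1.0), 0.0)
p = [exp(-lam) * lam ** n / factorial(n) for n in range(n_max + 1)]

g, fn = np.zeros(m), f.copy()          # fn holds the n-fold convolution f^{*n}
for n in range(1, n_max + 1):
    g += p[n] * fn                     # accumulate the term p_n f^{*n}
    fn = fftconvolve(fn, f)[:m] * h    # next convolution power on the grid

# The atom p_0 at 0 does not contribute to the tail; compare G-bar(x0),
# up to grid truncation, with E(N) * F-bar(x0) = lam * x0 ** (-alpha).
x0 = 30.0
print(g[x >= x0].sum() * h, lam * x0 ** (-alpha))

The two printed numbers should be of the same order; the asymptotic equality behind this comparison is the subject of Proposition 1 below.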

To formulate our results, we recall some of the basic definitions. A positive and measurable real function $g(x)$ is regularly varying with real index $\alpha$ if
$$\lim_{x \to \infty} \frac{g(tx)}{g(x)} = t^{\alpha}, \quad \forall t > 0.$$
Notation: $g \in RV(\alpha)$. The function $g(x)$ is in the class $L$ if it satisfies
$$\lim_{x \to \infty} \frac{g(t + x)}{g(x)} = 1, \quad \forall t > 0.$$
The function $g(x)$ is in the class $ORV$ of O-regularly varying functions if
$$\limsup_{x \to \infty} \frac{g(tx)}{g(x)} = g^*(t) < \infty, \quad \forall t > 0.$$
For a survey of definitions, properties and applications of $RV$ and $ORV$, we refer to Bingham et al. (1987), Geluk and de Haan (1987), Seneta (1976) and Resnick (1987).

A d.f. $F$ is in the class $S$ of subexponential distributions if
$$\lim_{x \to \infty} \frac{1 - F^{*2}(x)}{1 - F(x)} = 2.$$
Notation: $F \in S$. A density function $f(x)$ is in the class $SD$ of subexponential densities if it satisfies
$$\lim_{x \to \infty} \frac{f^{*2}(x)}{f(x)} = 2.$$
Notation: $f \in SD$. It is well known that if $\overline{F}(x) \in L \cap ORV$, then $F \in S$, and if $f(x) \in L \cap ORV$, then $f \in SD$. The class $S$ was introduced by Chistyakov (1964) and Teugels (1975) and was studied by Chover et al. (1973a, 1973b), Cline (1987) and Embrechts et al. (1979, 1980, 1982, 1985, 1997).
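A small numerical check may support the intuition here (our own illustration, not part of the paper): for a Pareto d.f., which is subexponential, the defining ratio can be evaluated by discretizing the convolution.

import numpy as np

# Assumed example: Pareto(1) with F(t) = 1 - 1/t on [1, oo); the ratio
# (1 - F*2(x)) / (1 - F(x)) should approach 2 as x grows.
alpha, h, m = 1.0, 0.01, 20000
t = h * np.arange(m)
f = np.where(t >= 1.0, alpha * np.maximum(t, 1.0) ** (-alpha - 1.0), 0.0)
F = np.cumsum(f) * h                # F(t), up to discretization error
f2 = np.convolve(f, f)[:m] * h      # density of X1 + X2
F2 = np.cumsum(f2) * h              # two-fold convolution F^{*2}(t)
for x in (20.0, 50.0, 150.0):
    i = int(x / h)
    print(x, (1.0 - F2[i]) / (1.0 - F[i]))

The printed ratios decrease towards 2, slowly, as is typical for heavy tails.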

If $F \in S$, it is well known that
$$\frac{1 - F^{*n}(x)}{1 - F(x)} \to n, \quad \text{as } x \to \infty.$$


An application of Fatou's lemma yields the result that
$$\liminf_{x \to \infty} \frac{1 - G(x)}{1 - F(x)} \ge E(N).$$

If $E(N) < \infty$, it makes sense to look for an upper bound for the ratio $\overline{G}(x)/\overline{F}(x)$. The following result is well known; see Embrechts et al. (1979, 1982), Chover et al. (1973a,b), Stam (1973), Shneer (2004), or Daley et al. (2007).

Proposition 1 (a) Suppose that $E((1 + \varepsilon)^N) < \infty$ for some $\varepsilon > 0$.
(i) If $F \in S$, then $G \in S$ and $\overline{G}(x) \sim E(N)\overline{F}(x)$.
(ii) If $f \in SD$, then $g \in SD$ and $g(x) \sim E(N)f(x)$.
(b) If $\overline{F} \in RV(-\alpha)$, $\alpha > 1$, and if $E(N^{\alpha+1+\varepsilon}) < \infty$, then $\overline{G} \in RV(-\alpha)$ and $\overline{G}(x) \sim E(N)\overline{F}(x)$.
(c) If $X \ge 0$ is an $\alpha$-stable ($0 < \alpha < 1$) r.v., then $\overline{F^{*n}}(x) \le n\overline{F}(x)$ and $\overline{G}(x) \le E(N)\overline{F}(x)$.
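Proposition 1(a)(i) is easy to probe by simulation. In the Monte Carlo sketch below (ours; the geometric subordinator and the Pareto tail are assumptions chosen so that $E((1+\varepsilon)^N) < \infty$ for small $\varepsilon > 0$ and $F \in S$), the empirical tail of $S(N)$ is compared with $E(N)\overline{F}(x)$:

import numpy as np

rng = np.random.default_rng(0)
reps, alpha, q = 300000, 1.5, 0.5
N = rng.geometric(q, size=reps)      # P(N = n) = q(1-q)^(n-1), E(N) = 1/q = 2
S = np.array([(rng.uniform(size=n) ** (-1.0 / alpha)).sum() for n in N])
for x in (20.0, 60.0, 200.0):
    # ratio P(S(N) > x) / (E(N) * x^(-alpha)); should drift towards 1
    print(x, (S > x).mean() / ((1.0 / q) * x ** (-alpha)))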

Result (b) shows that if we make weaker assumptions on $N$, we have to assume more about $F$.

The main contributions of the paper are to extend Proposition 1 in a number of ways. In Section 2 we discuss the case in which the r.v.s $X_i$ are independent but not necessarily identically distributed. Secondly, we consider Proposition 1(b) in the case where the mean $\mu = E(X) = \infty$. We also formulate some new results in the case where the $X_i$ are dependent but asymptotically independent. In Section 3 of the paper, we state and prove a bivariate analogue of Proposition 1.

In the results below, limits are always limits as $x \to \infty$, $t \to \infty$ or $\min(x, y) \to \infty$. The notation $a(x) \asymp b(x)$ means that $u\,b(x) \le a(x) \le v\,b(x)$ for some $u, v > 0$ and all $x \ge x_0$. The notation $a(x) \sim b(x)$ means that $a(x)/b(x) \to 1$. We use similar notations for bivariate functions.

    2 Univariate results

Our aim here is to discuss the tail distributions $P(S(n) > x)$ and $\overline{G}(x)$. More precisely, we want to obtain universal inequalities, asymptotic inequalities and asymptotic equalities. If the $X_i$ are i.i.d. with a finite mean $\mu$, many results are known. In the literature, much less is known in the infinite mean case.

    2.1 Upper Bounds

For further use, we define the integrated tail $m_F(x)$ as $m_F(x) = \int_0^x \overline{F}(t)\,dt$. Clearly, we have $x\overline{F}(x) \le m_F(x)$. Recall that the Laplace-Stieltjes transform (LST) of $K(x)$ is given by
$$\widehat{K}(s) = \int_0^{\infty} \exp(-sx)\,dK(x).$$


Clearly the LST of $F(x) = P(X \le x)$ is given by
$$\widehat{F}(s) = E(\exp(-sX)).$$
Moreover, we have $\widehat{m}_F(s) = (1 - \widehat{F}(s))/s$. Now suppose that $X_i$ has d.f. $F_i(x)$, LST $\widehat{F}_i(s)$ and integrated tail $m_i(x)$. For the sum $S(n)$, $n \ge 1$, we set
$$m_{S(n)}(x) = \int_0^x P(S(n) > z)\,dz.$$

The following result slightly extends Lemma 5(i) of Daley et al. (2007).

Lemma 2 (i) We have $m_{S(n)}(x) \le \sum_{i=1}^n m_i(x)$ and $P(S(n) > x) \le \sum_{i=1}^n m_i(x)/x$.
(ii) In the i.i.d. case, we have $m_{S(n)}(x) \le n\,m_F(x)$ and $P(S(n) > x) \le n\,m_F(x)/x$.

Proof. The LST of $m_{S(n)}(x)$ is given by
$$\widehat{m}_{S(n)}(s) = \frac{1 - \widehat{F}_1(s)\widehat{F}_2(s)\cdots\widehat{F}_n(s)}{s}.$$
Repeated use of the equality $1 - ab = (1 - a)b + (1 - b)$ shows that
$$\widehat{m}_{S(n)}(s) = \sum_{i=1}^n \widehat{m}_i(s)a_i(s),$$
where for each $i$, $a_i(s)$ is the product of one or more of the $\widehat{F}_j(s)$. So we find that
$$m_{S(n)}(x) = \sum_{i=1}^n m_i * A_i(x),$$
where $A_i(x)$ is the convolution product of one or more of the $F_j(x)$. It follows that
$$m_{S(n)}(x) \le \sum_{i=1}^n m_i(x).$$
Using the inequality $xP(S(n) > x) \le m_{S(n)}(x)$, we have (i). The second result follows from the first result.

Remark. This lemma only makes sense if the means $\mu_i = E(X_i)$ are infinite.
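To see the bound of Lemma 2(ii) in action in the infinite mean case, consider the following Monte Carlo sketch (ours; the Pareto(1/2) example, for which $m_F(x) = 2\sqrt{x} - 1$ when $x \ge 1$, is an assumption):

import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 200000
X = rng.uniform(size=(reps, n)) ** (-2.0)  # Pareto(1/2): F-bar(t) = t^(-1/2), t >= 1
S = X.sum(axis=1)
for x in (10.0, 100.0, 1000.0):
    # Lemma 2(ii): P(S(n) > x) <= n * m_F(x) / x with m_F(x) = 2*sqrt(x) - 1
    print(x, (S > x).mean(), n * (2.0 * np.sqrt(x) - 1.0) / x)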

    2.2 Lower Bounds

In general, it is hard to obtain lower bounds valid for all $x > 0$. We prove the following liminf result.


Lemma 3 (i) For $n = 1, 2, \ldots$ we have
$$\liminf_{x \to \infty} \frac{P(S(n) > x)}{\sum_{i=1}^n \overline{F}_i(x)} \ge 1. \quad (1)$$
(ii) In the i.i.d. case, we have
$$\liminf_{x \to \infty} \frac{P(S(n) > x)}{\overline{F}(x)} \ge n. \quad (2)$$

Proof. (i) We have
$$P(S(n) > x) = 1 - \int_0^x P(S(n-1) \le x - z)\,dF_n(z)$$
$$= \int_0^x P(S(n-1) > x - z)\,dF_n(z) + (1 - F_n(x))$$
$$\ge P(S(n-1) > x)F_n(x) + \overline{F}_n(x).$$
It follows that
$$P(S(n) > x) \ge P(S(n-1) > x) + \overline{F}_n(x) - P(S(n-1) > x)\overline{F}_n(x).$$
Using similar arguments for $S(n-1), S(n-2), \ldots$, we obtain that
$$P(S(n) > x) \ge P(S(n-2) > x) + \overline{F}_{n-1}(x) + \overline{F}_n(x) - P(S(n-2) > x)\overline{F}_{n-1}(x) - P(S(n-1) > x)\overline{F}_n(x),$$
and then
$$P(S(n) > x) \ge \overline{F}_1(x) + \cdots + \overline{F}_n(x) - \sum_{i=1}^{n-1} P(S(i) > x)\overline{F}_{i+1}(x).$$
Using
$$0 \le \frac{P(S(i) > x)\overline{F}_{i+1}(x)}{\overline{F}_1(x) + \cdots + \overline{F}_n(x)} \le P(S(i) > x),$$
we obtain that
$$\frac{P(S(n) > x)}{\overline{F}_1(x) + \cdots + \overline{F}_n(x)} \ge 1 - \sum_{i=1}^{n-1} P(S(i) > x),$$
and (1) follows. The second result follows from the first result.

    2.3 Subordination in the i.i.d. case

    For the subordinated tail, in the i.i.d. case, we obtain the following result.

Lemma 4 (i) We have $x\overline{G}(x) \le m_G(x) \le E(N)m_F(x)$.
(ii) If $\mu = E(X) = \infty$, then $m_G(x) \sim E(N)m_F(x)$.


Proof. See Daley et al. (2007, Lemma 5).
Note that we have $x\overline{G}(x) \le E(N)m_F(x)$ without any extra conditions. In the infinite mean case this is a new result. To relate $m_F(x)$ and $\overline{F}(x)$, we need an extra assumption about $F(x)$.

Lemma 5 (i) For $0 \le \alpha < 1$, we have $\overline{F}(x) \in RV(-\alpha)$ if and only if $m_F(x) \in RV(1 - \alpha)$, and both statements imply that $m_F(x) \sim x\overline{F}(x)/(1 - \alpha)$.
(ii) If $\overline{F}(x) \in ORV$ with lower Matuszewska index $\beta(F) > -1$, then $m_F(x) \asymp x\overline{F}(x)$.

Proof. (i) This is a standard result in regular variation theory (e.g. Bingham et al. (1987), Theorem 1.6.4).
(ii) This is a standard result in $ORV$-theory (e.g. Bingham et al. (1987), Corollary 2.6.2).
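A one-line numerical confirmation of Lemma 5(i) (our own sketch; for the Pareto tail $\overline{F}(t) = t^{-\alpha}$ on $[1, \infty)$ the integrated tail $m_F(x) = 1 + (x^{1-\alpha} - 1)/(1-\alpha)$ is available in closed form):

alpha = 0.3
mF = lambda x: 1.0 + (x ** (1.0 - alpha) - 1.0) / (1.0 - alpha)
for x in (1e2, 1e4, 1e6):
    # m_F(x) / (x * F-bar(x)) should approach 1 / (1 - alpha) = 1.4285...
    print(x, mF(x) / (x * x ** (-alpha)), 1.0 / (1.0 - alpha))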

    The main result of this section is the following.

Theorem 6 (i) For $0 \le \alpha < 1$, we have $\overline{F}(x) \in RV(-\alpha)$ if and only if $\overline{G}(x) \in RV(-\alpha)$, and both statements imply that $\overline{G}(x) \sim E(N)\overline{F}(x)$.
(ii) $\overline{F}(x) \in ORV$ with $\beta(F) > -1$ if and only if $\overline{G}(x) \in ORV$ with $\beta(G) > -1$, and both statements imply that $\overline{G}(x) \asymp \overline{F}(x)$.

Proof. (i) Since $m_G(x) \sim E(N)m_F(x)$ (cf. Lemma 4(ii)), the result follows from Lemma 5(i).
(ii) First assume that $\overline{F}(x) \in ORV$ with $\beta(F) > -1$. Using $m_F(x) \asymp x\overline{F}(x)$, we obtain that $m_G(x) \asymp x\overline{F}(x)$ and hence
$$\frac{x\overline{G}(x)}{x\overline{F}(x)} \le \frac{m_G(x)}{x\overline{F}(x)} \le A.$$
Using $\liminf \overline{G}(x)/\overline{F}(x) \ge E(N)$, we find that $\overline{G}(x) \asymp \overline{F}(x)$. But then it follows that $\overline{G}(x) \in ORV$ with $\beta(G) > -1$.

Now assume that $\overline{G}(x) \in ORV$ with $\beta(G) > -1$. Using $m_F(x) \asymp m_G(x) \asymp x\overline{G}(x)$, we can find constants $0 < a < b$ and $x_0$ so that
$$ax\overline{G}(x) \le m_F(x) \le bx\overline{G}(x), \quad x \ge x_0.$$
Now take $t > 1$ and observe on the one hand that
$$m_F(xt) - m_F(x) = \int_x^{xt} \overline{F}(z)\,dz \le \overline{F}(x)x(t-1).$$
On the other hand, we have $m_F(xt) - m_F(x) \ge axt\overline{G}(xt) - bx\overline{G}(x)$ and we get that
$$\overline{F}(x)(t-1) \ge at\overline{G}(xt) - b\overline{G}(x),$$
or
$$\frac{\overline{F}(x)}{\overline{G}(x)}(t-1) \ge at\frac{\overline{G}(xt)}{\overline{G}(x)} - b.$$
Because $\overline{G}(x) \in ORV$ with $\beta(G) > -1$, for $t$ sufficiently large we find that
$$\liminf_{x \to \infty} \frac{\overline{F}(x)}{\overline{G}(x)}(t-1) > 0.$$
Since we always have the limsup result, the theorem follows.
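Note that Theorem 6(i) only requires $E(N) < \infty$, so it covers subordinators without exponential moments, for which Proposition 1(a) is silent. A Monte Carlo sketch of this case (ours; the $\alpha = 1/2$ Pareto summands and the truncated $P(N = n) \propto n^{-3}$ subordinator are assumptions):

import numpy as np

rng = np.random.default_rng(2)
n_max, reps = 200, 100000
p = np.arange(1, n_max + 1, dtype=float) ** -3.0
p /= p.sum()                              # P(N = n) ~ C n^(-3): finite mean,
EN = (np.arange(1, n_max + 1) * p).sum()  # but no exponential moments
N = rng.choice(np.arange(1, n_max + 1), size=reps, p=p)
S = np.array([(rng.uniform(size=k) ** -2.0).sum() for k in N])
for x in (1e3, 1e4, 1e5):
    # Theorem 6(i): P(S(N) > x) / (E(N) * x^(-1/2)) should approach 1
    print(x, (S > x).mean() / (EN * x ** -0.5))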


    2.4 Subordination in the independent case

If we only have independent components, we proceed in a different way. In the next results we have to make extra assumptions about the asymptotic behaviour of the tails $\overline{F}_i$. The assumptions that we use are similar to those made in Skučaitė (2004) and Maejima (1972). We have the following result.

Lemma 7 Suppose that, for some positive function $\varphi(x)$ and for $i \ge 1$, we have
$$\liminf_{x \to \infty} \frac{\overline{F}_i(x)}{\varphi(x)} \ge d(i).$$
Then
$$\liminf_{x \to \infty} \frac{\overline{G}(x)}{\varphi(x)} \ge E(D(N)),$$
where $D(n) = \sum_{i=1}^n d(i)$.

Proof. First note that

$$\overline{G}(x) = \sum_{n=1}^{\infty} p_n \frac{P(S(n) > x)}{\sum_{i=1}^n \overline{F}_i(x)} \sum_{i=1}^n \overline{F}_i(x).$$
Under the assumptions of the lemma, for each $n \ge 1$, we have that
$$\liminf_{x \to \infty} \frac{\sum_{i=1}^n \overline{F}_i(x)}{\varphi(x)} \ge D(n),$$
where $D(n) = \sum_{i=1}^n d(i)$. Using Fatou's lemma and (1), we obtain the desired result.

To also find an upper bound, we use stronger assumptions and proceed as follows.

Lemma 8 Suppose that for each $n \ge 1$ we have $\overline{F}_n(x)/\varphi(x) \to d(n) \ge 0$, and $\Phi(x) = \int_0^x \varphi(z)\,dz \uparrow \infty$ as $x \uparrow \infty$. Also suppose that for some constant $c \ge 0$ we have
$$\sum_{i=1}^n m_i(x) \le \Phi(x)A(n), \quad \forall x \ge c,$$
and $E(A(N)) < \infty$. Then $\sup_{x \ge c} m_G(x)/\Phi(x) \le E(A(N))$, and $m_G(x) \sim \Phi(x)E(D(N))$, where $D(n) = \sum_{i=1}^n d(i)$.

Proof. Using $\overline{F}_i(x)/\varphi(x) \to d(i) \ge 0$ and $\Phi(x) \to \infty$, we have $m_i(x)/\Phi(x) \to d(i)$. From here it follows that
$$\sum_{i=1}^n \overline{F}_i(x)/\varphi(x) \to D(n)$$


and
$$\sum_{i=1}^n m_i(x)/\Phi(x) \to D(n).$$

Using Lemma 2(i), we have
$$\frac{m_G(x)}{\Phi(x)} = \sum_{n=1}^{\infty} p_n \frac{m_{S(n)}(x)}{\Phi(x)} \le \sum_{n=1}^{\infty} p_n \frac{\sum_{i=1}^n m_i(x)}{\Phi(x)},$$
and it follows that
$$\sup_{x \ge c} \frac{m_G(x)}{\Phi(x)} \le \sum_{n=1}^{\infty} p_n A(n) = E(A(N)).$$

For the second result, if $E(D(N)) = \infty$, there is nothing to prove. So we assume that $E(D(N)) < \infty$. To prove the second result, we use Pratt's extension of Fatou's lemma, cf. Pratt (1960), Johns (1957). Observe the following facts. We have
(1) $\sum_{i=1}^n m_i(x)/\Phi(x) \to D(n)$, as $x \to \infty$;
(2) $0 \le A(n) - \sum_{i=1}^n m_i(x)/\Phi(x)$, for $x \ge c$;
(3) $A(n) - \sum_{i=1}^n m_i(x)/\Phi(x) \to A(n) - D(n)$, as $x \to \infty$.
Using Fatou's lemma, we obtain that
$$\liminf_{x \to \infty} \sum_{n=1}^{\infty} p(n)\left[A(n) - \sum_{i=1}^n m_i(x)/\Phi(x)\right] \ge \sum p(n)[A(n) - D(n)] = E(A(N)) - E(D(N)).$$
Since $\sum_{n=1}^{\infty} p(n)A(n) = E(A(N)) < \infty$, we find that
$$\limsup_{x \to \infty} \sum p(n)\sum_{i=1}^n m_i(x)/\Phi(x) \le E(D(N)).$$
We conclude that
$$\limsup_{x \to \infty} \frac{m_G(x)}{\Phi(x)} \le E(D(N)).$$

On the other hand, using
$$\frac{1}{\varphi(x)}P(S(n) > x) = \frac{P(S(n) > x)}{\sum_{i=1}^n \overline{F}_i(x)} \cdot \frac{\sum_{i=1}^n \overline{F}_i(x)}{\varphi(x)}$$
and (1), we get that
$$\liminf_{x \to \infty} \frac{1}{\varphi(x)}P(S(n) > x) \ge D(n).$$
It follows that
$$\liminf_{x \to \infty} \frac{\overline{G}(x)}{\varphi(x)} \ge E(D(N)).$$


Using Fatou's lemma again, we obtain that
$$\liminf_{x \to \infty} \frac{m_G(x)}{\Phi(x)} \ge E(D(N)),$$
and we conclude that $m_G(x) \sim E(D(N))\Phi(x)$.
As before, we need an extra condition on $\varphi(x)$ or $\Phi(x)$ to get a result for $\overline{G}(x)$, cf. Lemma 5.

    2.4.1 Example 1

Take $\overline{F}_i(x) = \overline{F}(x) = \varphi(x)$, for all $i \ge 1$. Here we have $d(i) = 1$ and $D(n) = n$.

    2.4.2 Example 2

Let $a(i) > 0$, $h(i) > 0$ and $\overline{F}_i(x) = 1 - F^{h(i)}(a(i)x)$. As before, we use the integrated tails, and for $i \ge 1$ we set $m_i(x) = m_{F_i}(x)$, where
$$m_i(x) = \int_0^x (1 - F^{h(i)}(a(i)z))\,dz = \frac{1}{a(i)}\int_0^{a(i)x} (1 - F^{h(i)}(z))\,dz.$$

Clearly, for $h(i) \ge 1$ we have $1 - F^{h(i)}(z) \le h(i)(1 - F(z))$, and then we obtain that
$$m_i(x) \le \frac{h(i)}{a(i)}m_F(a(i)x), \quad i \ge 1,$$
and
$$\sum_{i=1}^n m_i(x) \le \sum_{i=1}^n \frac{h(i)}{a(i)}m_F(a(i)x).$$
Multiplying by $p_n = P(N = n)$ and taking sums, we obtain the universal bound
$$m_G(x) \le E\left(\sum_{i=1}^N \frac{h(i)}{a(i)}m_F(a(i)x)\right).$$

    2.4.3 Example 3

Take Example 2 again and assume that $\overline{F}(x) \in RV(-\alpha)$, $0 < \alpha < 1$. For fixed $i$ we have, as $x \to \infty$,
$$1 - F^{h(i)}(a(i)x) \sim h(i)(1 - F(a(i)x)) \sim h(i)a^{-\alpha}(i)\overline{F}(x).$$
Hence, we obtain that $m_i(x) \sim h(i)a^{-\alpha}(i)m_F(x)$, so that
$$\sum_{i=1}^n m_i(x) \sim D(n)m_F(x),$$


where $D(n) = \sum_{i=1}^n h(i)a^{-\alpha}(i)$. On the other hand, we have
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le h(i)\frac{\overline{F}(a(i)x)}{\overline{F}(x)}.$$

Since $\overline{F}(x) \in RV(-\alpha)$, Potter's bounds give
$$\frac{\overline{F}(a(i)x)}{\overline{F}(x)} \le A(a(i))^{-\alpha+\varepsilon}, \quad a(i) \ge 1,\ x \ge x_0,$$
and
$$\frac{\overline{F}(a(i)x)}{\overline{F}(x)} \le B(a(i))^{-\alpha-\varepsilon}, \quad a(i) \le 1,\ a(i)x \ge x_0,\ x \ge x_0.$$
For the remaining case of $a(i) \le 1$, $a(i)x \le x_0$ and $x \ge x_0$, we have
$$\frac{\overline{F}(a(i)x)}{\overline{F}(x)} \le \frac{1}{\overline{F}(x)}.$$
Also, since $x \le x_0/a(i)$, we have $\overline{F}(x) \ge \overline{F}(x_0/a(i))$. Since $1/a(i) \ge 1$, the usual bounds give
$$B\left(\frac{1}{a(i)}\right)^{-\alpha-\varepsilon} \le \frac{\overline{F}(x_0/a(i))}{\overline{F}(x_0)} \le A\left(\frac{1}{a(i)}\right)^{-\alpha+\varepsilon},$$
and then it follows that
$$\frac{1}{\overline{F}(x)} \le \frac{1}{\overline{F}(x_0/a(i))} \le \frac{(a(i))^{-\alpha-\varepsilon}}{B\overline{F}(x_0)} = C(a(i))^{-\alpha-\varepsilon}.$$

We conclude that
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le Ah(i)(a(i))^{-\alpha+\varepsilon}, \quad a(i) \ge 1,\ x \ge x_0,$$
and
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le Dh(i)(a(i))^{-\alpha-\varepsilon}, \quad a(i) \le 1,\ x \ge x_0.$$
It follows that
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le Ch(i)\max((a(i))^{-\alpha+\varepsilon}, (a(i))^{-\alpha-\varepsilon}), \quad x \ge x_0.$$
This implies that
$$\sup_{x \ge x_0} \frac{\sum_{i=1}^n \overline{F}_i(x)}{\overline{F}(x)} \le A(n),$$
where
$$A(n) = C\sum_{i=1}^n h(i)\max((a(i))^{-\alpha+\varepsilon}, (a(i))^{-\alpha-\varepsilon}).$$


For $\overline{G}(x)$ we find that
$$\sup_{x \ge x_0} \frac{\overline{G}(x)}{\overline{F}(x)} \le E(A(N)).$$
A lower bound is easy to find. We have
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \to h(i)a^{-\alpha}(i) = d(i).$$
Using Fatou's lemma (cf. Lemma 7), we obtain that
$$\liminf_{x \to \infty} \frac{\overline{G}(x)}{\overline{F}(x)} \ge E(D(N)).$$

Now we proceed to find bounds for the $m$-functions. We have
$$m_i(x) \le \frac{h(i)}{a(i)}m_F(a(i)x), \quad i \ge 1.$$
In the regularly varying case $\overline{F}(x) \in RV(-\alpha)$, $0 < \alpha < 1$, we have $m_F(x) \in RV(1-\alpha)$. Potter's bounds show that
$$\frac{m_F(a(i)x)}{m_F(x)} \le Aa^{1-\alpha+\varepsilon}(i), \quad \text{for } x \ge x_0,\ a(i) \ge 1,$$
$$\frac{m_F(a(i)x)}{m_F(x)} \le 1, \quad \text{for } x \ge x_0,\ a(i) \le 1.$$
But then
$$\sup_{x \ge x_0} \frac{\sum_{i=1}^n m_i(x)}{m_F(x)} \le \sup_{x \ge x_0} \sum_{i=1}^n \frac{h(i)}{a(i)}\frac{m_F(a(i)x)}{m_F(x)} \le A(n),$$
where now
$$A(n) = A\sum_{i=1,\ a(i) \ge 1}^n h(i)a^{-\alpha+\varepsilon}(i) + \sum_{i=1,\ a(i) \le 1}^n \frac{h(i)}{a(i)}.$$
If $E(A(N)) < \infty$, we can proceed as before (Lemma 8, with $\varphi = \overline{F}$ and $\Phi = m_F$), and we find that
$$\frac{m_G(x)}{m_F(x)} \to E(D(N)).$$
In this example, we also have that $m_F(x) \sim x\overline{F}(x)/(1-\alpha)$. Using the monotone density theorem, we find that
$$\overline{G}(x) \sim E(D(N))\overline{F}(x).$$
Special cases are $h(i) = 1$ and $h(i) = i$.
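To make the special cases concrete (a worked example of ours, not taken from the paper): take $h(i) = 1$ and $a(i) = i$. Then $d(i) = i^{-\alpha}$ and
$$D(n) = \sum_{i=1}^n i^{-\alpha} \sim \frac{n^{1-\alpha}}{1-\alpha}, \qquad A(n) = A\sum_{i=1}^n i^{-\alpha+\varepsilon} \sim \frac{An^{1-\alpha+\varepsilon}}{1-\alpha+\varepsilon},$$
so the conditions $E(A(N)) < \infty$ and $E(D(N)) < \infty$ both reduce, up to constants, to $E(N^{1-\alpha+\varepsilon}) < \infty$ for some small $\varepsilon > 0$, in which case $\overline{G}(x) \sim E(D(N))\overline{F}(x)$.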


    2.4.4 Example 4

Let $a(i) > 0$ and $F_i(x) = F^{*i}(a(i)x)$, and assume that $\overline{F} \in RV(-\alpha)$, $0 < \alpha < 1$. First note that
$$1 - F_i(x) \le i\left(1 - F\left(\frac{a(i)x}{i}\right)\right).$$
As in Example 3, it follows that
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le Ai\left(\frac{a(i)}{i}\right)^{-\alpha+\varepsilon}, \quad a(i)/i \ge 1,\ x \ge x_0,$$
and
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le Di\left(\frac{a(i)}{i}\right)^{-\alpha-\varepsilon}, \quad a(i)/i \le 1,\ x \ge x_0.$$
It follows that
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \le Ci\max\left(\left(\frac{a(i)}{i}\right)^{-\alpha+\varepsilon}, \left(\frac{a(i)}{i}\right)^{-\alpha-\varepsilon}\right), \quad x \ge x_0.$$
For $m_i(x)$ we find (cf. Lemma 2(ii)) that
$$m_i(x) = \int_0^x (1 - F^{*i}(a(i)z))\,dz \le \frac{i}{a(i)}m_F(a(i)x).$$
Example 4 can be completed as in Example 3.

    2.5 Subordination in a dependent case

In this section we assume that the $X_i$ are dependent, but asymptotically independent. We assume that for each $i \ne j$ we have
$$\frac{P(X_i > x, X_j > x)}{\overline{F}_i(x) + \overline{F}_j(x)} \to 0. \quad (3)$$
In the next result we formulate additional assumptions to prove that $P(S(n) > x)$ asymptotically equals $\sum_{i=1}^n \overline{F}_i(x)$.

Proposition 9 Assume that (3) holds and that for each $i$, $\overline{F}_i(x) \in ORV$ with $\gamma_i(t) \to 1$ as $t \uparrow 1$, where $\gamma_i(t) = \limsup_{x \to \infty} \overline{F}_i(tx)/\overline{F}_i(x)$. Then
$$P(S(n) > x) \sim \sum_{i=1}^n \overline{F}_i(x).$$

Proof. Part 1. First we prove the result for $n = 2$. Choose $\delta$, $0 < \delta < 1/2$, and write
$$P(X_1 + X_2 > x) = I + II - III + IV,$$


where
$$I = P(X_1 + X_2 > x,\ X_2 > (1-\delta)x),$$
$$II = P(X_1 + X_2 > x,\ X_1 > (1-\delta)x),$$
$$III = P(X_1 > (1-\delta)x,\ X_2 > (1-\delta)x),$$
$$IV = P(X_1 + X_2 > x,\ \delta x \le X_1 \le (1-\delta)x,\ \delta x \le X_2 \le (1-\delta)x).$$

For $I$ (and similarly for $II$) we have $\overline{F}_2(x) \le I \le \overline{F}_2((1-\delta)x)$, and then it follows that
$$1 \le \liminf_{x \to \infty} \frac{I}{\overline{F}_2(x)} \le \limsup_{x \to \infty} \frac{I}{\overline{F}_2(x)} \le \gamma_2(1-\delta).$$

For $III$, we write
$$III = \frac{III}{III(a)} \cdot III(a),$$
where $III(a) = P(X_1 > (1-\delta)x) + P(X_2 > (1-\delta)x)$. Now observe that
$$P(X_1 > (1-\delta)x) = \frac{\overline{F}_1((1-\delta)x)}{\overline{F}_1(x)}\overline{F}_1(x),$$
and similarly for $P(X_2 > (1-\delta)x)$. It follows that
$$III(a) \le \max\left(\frac{\overline{F}_1((1-\delta)x)}{\overline{F}_1(x)}, \frac{\overline{F}_2((1-\delta)x)}{\overline{F}_2(x)}\right)(\overline{F}_1(x) + \overline{F}_2(x)),$$
and hence also that
$$\limsup_{x \to \infty} \frac{III(a)}{\overline{F}_1(x) + \overline{F}_2(x)} \le \max(\gamma_1(1-\delta), \gamma_2(1-\delta)).$$
Using (3), we obtain that
$$\frac{III}{\overline{F}_1(x) + \overline{F}_2(x)} = \frac{III}{III(a)} \cdot \frac{III(a)}{\overline{F}_1(x) + \overline{F}_2(x)} \to 0.$$

Now we investigate $IV$. We have $IV \le P(X_1 > \delta x, X_2 > \delta x)$. As in the case of $III$, we get that
$$\frac{IV}{\overline{F}_1(x) + \overline{F}_2(x)} \to 0.$$
Combining all terms, we find that
$$1 \le \liminf_{x \to \infty} \frac{P(X_1 + X_2 > x)}{\overline{F}_1(x) + \overline{F}_2(x)} \le \limsup_{x \to \infty} \frac{P(X_1 + X_2 > x)}{\overline{F}_1(x) + \overline{F}_2(x)} \le \max(\gamma_1(1-\delta), \gamma_2(1-\delta)).$$
Now we take $\delta \downarrow 0$ to obtain that
$$\frac{P(S(2) > x)}{\overline{F}_1(x) + \overline{F}_2(x)} = \frac{P(X_1 + X_2 > x)}{\overline{F}_1(x) + \overline{F}_2(x)} \to 1.$$


This proves the result for $n = 2$.
Part 2. Assume that the result holds for $S(2), S(3), \ldots, S(n-1)$. To prove the result for $S(n)$ we consider $S(n-1)$ and $X_n$. By the induction hypothesis we have $P(S(n-1) > x) \sim \sum_{i=1}^{n-1} \overline{F}_i(x)$. It is straightforward to prove that
$$\limsup_{x \to \infty} \frac{P(S(n-1) > tx)}{P(S(n-1) > x)} = \gamma^*_{n-1}(t) \le \max(\gamma_1(t), \gamma_2(t), \ldots, \gamma_{n-1}(t)).$$
This shows that $P(S(n-1) > x) \in ORV$ and $\gamma^*_{n-1}(t) \to 1$ as $t \uparrow 1$.
Now we prove that $S(n-1)$ and $X_n$ are asymptotically independent. Using
$$\{X_1 \le x/(n-1), X_2 \le x/(n-1), \ldots, X_{n-1} \le x/(n-1)\} \subseteq \{S(n-1) \le x\},$$
it follows that
$$P(S(n-1) > x, X_n > x) \le P(S(n-1) > x, X_n > x/(n-1)) \le \sum_{i=1}^{n-1} P(X_i > x/(n-1), X_n > x/(n-1)).$$
Using (3), it follows that $P(S(n-1) > x, X_n > x) = o(1)\left(\sum_{i=1}^n \overline{F}_i(x)\right)$, or that
$$P(S(n-1) > x, X_n > x) = o(1)(P(S(n-1) > x) + \overline{F}_n(x)).$$
We can proceed as in Part 1 to prove that $P(S(n) > x) \sim P(S(n-1) > x) + \overline{F}_n(x)$. This proves the result.
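A Monte Carlo sketch of Proposition 9 for $n = 2$ (the construction is ours, not from the paper): the countermonotone Pareto pair $X_1 = U^{-1/\alpha}$, $X_2 = (1-U)^{-1/\alpha}$ is strongly dependent, yet $P(X_1 > x, X_2 > x) = 0$ for all large $x$, so (3) holds trivially, and $\overline{F}_i \in RV(-\alpha) \subset ORV$ with $\gamma_i(t) \to 1$ as $t \uparrow 1$.

import numpy as np

rng = np.random.default_rng(3)
alpha, reps = 1.0, 5 * 10**6
U = rng.uniform(size=reps)
S = U ** (-1.0 / alpha) + (1.0 - U) ** (-1.0 / alpha)  # dependent Pareto(1) pair
for x in (1e2, 1e3, 4e3):
    # Proposition 9: P(X1 + X2 > x) / (F1-bar(x) + F2-bar(x)) -> 1
    print(x, (S > x).mean() / (2.0 * x ** -alpha))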

Now let $N$ denote an integer-valued r.v., independent of all the $X_i$, and consider the random sum $S(N)$. We have
$$P(S(N) > x) = \sum_{n=1}^{\infty} p_n P(S(n) > x).$$
We prove the following result.

Theorem 10 Assume that the conditions of Proposition 9 hold.
(i) If there is a d.f. $F(x)$ such that
$$\liminf_{x \to \infty} \frac{\overline{F}_i(x)}{\overline{F}(x)} \ge d(i),$$
then
$$\liminf_{x \to \infty} \frac{P(S(N) > x)}{\overline{F}(x)} \ge E(D(N)),$$
where $D(n) = \sum_{i=1}^n d(i)$.


(ii) Suppose there is a d.f. $F(x)$ such that $\overline{F}(x) \in RV(-\alpha)$ and such that
$$\sup_{x \ge 0} \frac{\overline{F}_i(x)}{\overline{F}(x)} \le a(i)$$
and
$$\frac{\overline{F}_i(x)}{\overline{F}(x)} \to d(i).$$
If $E(N^{\alpha+\varepsilon}A(N)) < \infty$, where $A(n) = \sum_{i=1}^n a(i)$, then $P(S(N) > x) \sim E(D(N))\overline{F}(x)$.

Proof. (i) Under the assumptions of the theorem we get that
$$\liminf_{x \to \infty} \frac{\sum_{i=1}^n \overline{F}_i(x)}{\overline{F}(x)} \ge D(n),$$
where $D(n) = \sum_{i=1}^n d(i)$. Fatou's lemma yields that
$$\liminf_{x \to \infty} \frac{P(S(N) > x)}{\overline{F}(x)} \ge E(D(N)).$$

(ii) Using $P(S(n) > x) \le \sum_{i=1}^n \overline{F}_i(x/n)$, for $x \ge 0$, we have
$$\frac{1}{\overline{F}(x)}P(S(n) > x) \le \frac{\overline{F}(x/n)}{\overline{F}(x)}\sum_{i=1}^n \frac{\overline{F}_i(x/n)}{\overline{F}(x/n)} \le \frac{\overline{F}(x/n)}{\overline{F}(x)}A(n),$$
where $A(n) = \sum_{i=1}^n a(i)$. Using $\overline{F}(x) \in RV(-\alpha)$, we obtain that
$$\frac{\overline{F}(x/n)}{\overline{F}(x)} \le Cn^{\alpha+\varepsilon}, \quad \text{for } x/n \ge x_0 \text{ and } x \ge x_0.$$
For $x/n \le x_0$ and $x \ge x_0$, we have $x \le nx_0$ and $\overline{F}(x) \ge \overline{F}(nx_0)$. Also, $\overline{F}(x/n) \le 1$. We find that
$$\frac{\overline{F}(x/n)}{\overline{F}(x)} \le \frac{1}{\overline{F}(nx_0)}.$$
For small $n \ge 1$, this is bounded. For large $n$, we can use $\overline{F}(x) \in RV(-\alpha)$ to find that
$$n^{\alpha+\varepsilon}\overline{F}(nx_0) \to \infty, \quad \text{as } n \to \infty.$$
We find that
$$\frac{\overline{F}(x/n)}{\overline{F}(x)} \le Cn^{\alpha+\varepsilon}, \quad \text{for } x/n \le x_0 \text{ and } x \ge x_0.$$
As a conclusion, we have that
$$\frac{\overline{F}(x/n)}{\overline{F}(x)} \le Cn^{\alpha+\varepsilon}, \quad \text{for } n \ge 1 \text{ and } x \ge x_0,$$


and
$$\frac{1}{\overline{F}(x)}P(S(n) > x) \le Cn^{\alpha+\varepsilon}A(n), \quad \text{for } n \ge 1 \text{ and } x \ge x_0.$$
We can use Lebesgue's dominated convergence theorem to find that
$$\frac{P(S(N) > x)}{\overline{F}(x)} \to E(D(N)).$$
This proves the result.

    3 Multivariate case

    3.1 Introduction and notation

For convenience and without loss of generality we only discuss the two-dimensional case. Let $F(x, y) = P(X \le x, Y \le y)$ denote a bivariate d.f. with marginals $F_1(x) = F_X(x)$ and $F_2(x) = F_Y(x)$, and suppose that $X \ge 0$, $Y \ge 0$. Partial sums will be denoted by $(S^{(1)}_n, S^{(2)}_n)$ and we use the notation $\overline{F}_n(x, y) = 1 - F_n(x, y)$, where $F_n(x, y)$ is the d.f. of $(S^{(1)}_n, S^{(2)}_n)$. We consider random indices $N$ or $(N, M)$ independent of the $X_i, Y_j$, and as before we set $S^{(1)}_0 = S^{(2)}_0 = 0$. For convenience, we also define $F_{n,m}(x, y) = P(S^{(1)}_n \le x, S^{(2)}_m \le y)$, $n, m \ge 1$. The d.f. of the random vector of random sums $(S^{(1)}_N, S^{(2)}_N)$ is given by $G(x, y) = \sum_{n=0}^{\infty} p_n F_n(x, y)$, where $p_n = P(N = n)$. The tail is given by
$$\overline{G}(x, y) = \sum_{n=1}^{\infty} p_n \overline{F}_n(x, y).$$

If we have different random indices for each of the components, we study the random vector of random sums $(S^{(1)}_N, S^{(2)}_M)$. In this case we have $H(x, y) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} p_{n,m} F_{n,m}(x, y)$, where $p_{n,m} = P(N = n, M = m)$. The tail is given by
$$\overline{H}(x, y) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} p_{n,m} \overline{F}_{n,m}(x, y).$$

As in the univariate case we use integrated tails as follows. For $F$ we define $m_F$ as
$$m_F(x, y) = \int_0^x \int_0^y \overline{F}(u, v)\,du\,dv, \quad \text{and} \quad m_i(x) = \int_0^x \overline{F}_i(t)\,dt,\ i = 1, 2.$$
Using $\overline{F}(x, y) \le \overline{F}_1(x) + \overline{F}_2(y)$ and $\overline{F}_i(x_i) \le \overline{F}(x_1, x_2)$, we obtain that $m_F(x, y)$ satisfies
$$m_F(x, y) \le ym_1(x) + xm_2(y),$$


and that
$$ym_1(x) \le m_F(x, y), \quad xm_2(y) \le m_F(x, y).$$
As before, we always have $xy\overline{F}(x, y) \le m_F(x, y)$.

    3.2 General inequalities

To obtain some general inequalities, first observe that
$$\max(1 - P(S^{(1)}_n \le x), 1 - P(S^{(2)}_m \le y)) \le 1 - F_{n,m}(x, y),$$
and that
$$1 - F_{n,m}(x, y) \le 1 - P(S^{(1)}_n \le x) + 1 - P(S^{(2)}_m \le y).$$

Among others, it follows that
$$\overline{G}(x, y) \le \overline{G}_1(x) + \overline{G}_2(y),$$
where $G_1(x) = P(S^{(1)}_N \le x)$ and $G_2(y) = P(S^{(2)}_N \le y)$. For $\overline{H}(x, y)$, we find that
$$\overline{H}(x, y) \le \overline{H}_1(x) + \overline{H}_2(y),$$
where $H_1(x) = P(S^{(1)}_N \le x)$ and $H_2(y) = P(S^{(2)}_M \le y)$.
Now we can use the univariate upper bounds for $\overline{G}_i$ or $\overline{H}_i$ of the previous section to find bivariate upper bounds for $\overline{G}$ or $\overline{H}$.

    Using Lemma 4 and Theorem 6 we obtain the following result.

Lemma 11 (i) $xy\overline{G}(x, y) \le m_G(x, y) \le 2E(N)m_F(x, y)$.
(ii) $xy\overline{H}(x, y) \le m_H(x, y) \le (E(N) + E(M))m_F(x, y)$.
(iii) Suppose that $\overline{F}_i \in ORV$ with $\beta(F_i) > -1$, or that $\overline{F}_i \in RV(-\alpha_i)$ with $0 \le \alpha_i < 1$. Then, as $\min(x, y) \to \infty$, we have $\overline{G}(x, y) = O(1)\overline{F}(x, y)$ and $\overline{H}(x, y) = O(1)\overline{F}(x, y)$.

Proof. (i) We have
$$1 - F_{n,m}(x, y) \le 1 - P(S^{(1)}_n \le x) + 1 - P(S^{(2)}_m \le y) \le nm_1(x)/x + mm_2(y)/y.$$
From this it follows that
$$m_G(x, y) \le ym_{G_1}(x) + xm_{G_2}(y) \le E(N)(ym_1(x) + xm_2(y)) \le 2E(N)m_F(x, y).$$
(ii) In a similar way, for $H$ we have
$$m_H(x, y) \le ym_{H_1}(x) + xm_{H_2}(y) \le E(N)ym_1(x) + E(M)xm_2(y) \le (E(N) + E(M))m_F(x, y).$$


(iii) For $G$ and $H$ we have that
$$xy\overline{G}(x, y) \le 2E(N)m_F(x, y), \quad xy\overline{H}(x, y) \le (E(N) + E(M))m_F(x, y).$$
Now, if $\overline{F}_i \in ORV$ with $\beta(F_i) > -1$, or if $\overline{F}_i \in RV(-\alpha_i)$ with $0 \le \alpha_i < 1$, we have $m_i(x) \asymp x\overline{F}_i(x)$ and then it follows that
$$xy\overline{F}(x, y) \le m_F(x, y) \le ym_1(x) + xm_2(y) \le C_1yx\overline{F}_1(x) + C_2xy\overline{F}_2(y) \le (C_1 + C_2)xy\overline{F}(x, y).$$
As $\min(x, y) \to \infty$, we find that $xy\overline{F}(x, y) \asymp m_F(x, y)$. But then, as $\min(x, y) \to \infty$, we have $\overline{G}(x, y) = O(1)\overline{F}(x, y)$. In a similar way we find that $\overline{H}(x, y) = O(1)\overline{F}(x, y)$.

In Lemma 11(iii) we found that $\overline{G}(x, y)$ and $\overline{H}(x, y)$ are bounded above by a multiple of $\overline{F}(x, y)$. To obtain a lower bound, we prove the following general result, cf. Lemma 3(ii).

Lemma 12 For all $n, m \ge 1$ we have
$$\liminf_{\min(x,y) \to \infty} \frac{1 - P(S^{(1)}_n \le x, S^{(2)}_m \le y)}{1 - F(x, y)} \ge \min(n, m).$$

Proof. First take $n = m$. We will prove the result by induction on $n$. First note that for $n = 1$ the result holds. Assume the result holds for $n = 1, 2, \ldots, k$ and consider the case where $n = k + 1$. We have
$$1 - F_{k+1,k+1}(x, y) = 1 - \int_{u=0}^x \int_{v=0}^y F_{k,k}(x - u, y - v)\,dF(u, v)$$
$$= \int_{u=0}^x \int_{v=0}^y (1 - F_{k,k}(x - u, y - v))\,dF(u, v) + 1 - F(x, y)$$
$$\ge (1 - F_{k,k}(x, y))F(x, y) + (1 - F(x, y)).$$
By the induction step, we find that
$$\liminf_{\min(x,y) \to \infty} \frac{1 - F_{k+1,k+1}(x, y)}{1 - F(x, y)} \ge k + 1.$$
Hence the result follows.
Now take $m = n + k$, $k \ge 1$, and write $S^{(2)}_m = S^{(2)}_n + R_k$. We have
$$1 - F_{n,m}(x, y) = 1 - P(S^{(1)}_n \le x, S^{(2)}_n + R_k \le y) = 1 - \int_0^y F_{n,n}(x, y - z)\,dF_k(z),$$


where $F_k(z) = P(R_k \le z)$. We find that
$$1 - F_{n,m}(x, y) = \int_0^y (1 - F_{n,n}(x, y - z))\,dF_k(z) + 1 - F_k(y)$$
$$\ge (1 - F_{n,n}(x, y))F_k(y) + 1 - F_k(y) \ge (1 - F_{n,n}(x, y))F_k(y).$$
Using the first result, we obtain that
$$\liminf_{\min(x,y) \to \infty} \frac{1 - F_{n,m}(x, y)}{1 - F(x, y)} \ge n.$$
This proves the result.
Going to random sums, we have the following corollary.

Corollary 13 We have
$$\liminf_{\min(x,y) \to \infty} \frac{\overline{G}(x, y)}{\overline{F}(x, y)} \ge E(N)$$
and
$$\liminf_{\min(x,y) \to \infty} \frac{\overline{H}(x, y)}{\overline{F}(x, y)} \ge E(\min(N, M)).$$

    The next result is the bivariate analogue of Lemma 4.

Lemma 14 (i) If $E(X) = E(Y) = \infty$, then, as $\min(x, y) \to \infty$, we have $m_G(x, y) \asymp m_F(x, y)$ and $m_H(x, y) \asymp m_F(x, y)$.
(ii) If $\overline{F}_i \in ORV$ with $\beta(F_i) > -1$, or if $\overline{F}_i \in RV(-\alpha_i)$ with $0 \le \alpha_i < 1$, then, as $\min(x, y) \to \infty$, we have $\overline{G}(x, y) \asymp \overline{F}(x, y)$ and $\overline{H}(x, y) \asymp \overline{F}(x, y)$.

Proof. (i) In Lemma 11 we proved that $m_G(x, y) = O(1)m_F(x, y)$ and $m_H(x, y) = O(1)m_F(x, y)$. To prove (i), choose $c > 0$ and $x_0$ such that
$$\overline{H}(x, y) \ge c\overline{F}(x, y), \quad x, y \ge x_0.$$
Taking integrals, we obtain that
$$m_H(x, y) \ge c\int_{x_0}^x \int_{x_0}^y \overline{F}(u, v)\,du\,dv = c(m_F(x, y) - R),$$
where
$$R = \left(\int_0^{x_0}\int_{x_0}^y + \int_{x_0}^x\int_0^{x_0} + \int_0^{x_0}\int_0^{x_0}\right)\overline{F}(u, v)\,du\,dv = R(1) + R(2) + R(3).$$


For the first term, we have
$$R(1) \le \int_0^{x_0}\int_{x_0}^y (\overline{F}_1(u) + \overline{F}_2(v))\,du\,dv \le m_1(x_0)y + x_0m_2(y),$$
and it follows that
$$\frac{R(1)}{m_F(x, y)} \le \frac{m_1(x_0)y}{m_F(x, y)} + \frac{x_0m_2(y)}{m_F(x, y)} \le \frac{m_1(x_0)}{m_1(x)} + \frac{x_0}{x} \to 0,$$
as $\min(x, y) \to \infty$. In a similar way, we have $R(2)/m_F(x, y) \to 0$. Since $R(3)$ is bounded, we finally have $R(3)/m_F(x, y) \to 0$. We conclude that
$$\liminf_{\min(x,y) \to \infty} \frac{m_H(x, y)}{m_F(x, y)} > 0.$$
This proves the result.
(ii) This follows from Lemma 11 and Corollary 13.
Remarks.

1) We need extra conditions to find exact asymptotic results for $\overline{G}(x, y)$ and $\overline{H}(x, y)$. Such results will be discussed below.

2) In the place of $m_F$, we can also consider $k_F$-functions as follows:
$$k_F(x, y) = \int_0^x \int_0^y P(X > u, Y > v)\,du\,dv.$$
Clearly we have $P(X > u, Y > v) \le P(X > u)$ and $P(X > u, Y > v) \le P(Y > v)$. It follows that
$$k_F(x, y) \le ym_1(x) \quad \text{and} \quad k_F(x, y) \le xm_2(y).$$
For $(S^{(1)}_n, S^{(2)}_m)$, we obtain that
$$k_{n,m}(x, y) \le ynm_1(x) \quad \text{and} \quad k_{n,m}(x, y) \le xmm_2(y),$$
and then also
$$k_{n,m}(x, y) \le nm_F(x, y) \quad \text{and} \quad k_{n,m}(x, y) \le mm_F(x, y),$$
or $k_{n,m}(x, y) \le \min(n, m)m_F(x, y)$. Taking random sums, we obtain that $k_H(x, y) \le E(\min(N, M))m_F(x, y)$.

    3.3 Subexponential marginals

In the next result we start from subexponential marginals $F_1(x)$ and $F_2(x)$. Then automatically $\overline{F}_1, \overline{F}_2 \in L$. We prove that the joint d.f. is then a multivariate subexponential d.f. Multivariate subexponential d.f.s have been studied by Cline and Resnick (1992) and Mallor et al. (2006). See also Mallor and Omey (2006) and Omey (2006). The next result extends Proposition 11 of Baltrunas et al. (2006).


Theorem 15 Suppose that $F_1(x) \in S$ and $F_2(x) \in S$. Then for all $n, m \ge 1$ and as $\min(x, y) \to \infty$ we have
(i)
$$1 - P(S^{(1)}_n \le x, S^{(2)}_m \le y) = (n - m)^+\overline{F}_1(x) + (m - n)^+\overline{F}_2(y) + \min(n, m)\overline{F}(x, y) + o(1)\overline{F}(x, y), \quad (4)$$
and
(ii)
$$P(S^{(1)}_n > x, S^{(2)}_m > y) = \min(n, m)P(X > x, Y > y) + o(1)\overline{F}(x, y). \quad (5)$$

Proof. We consider $F_{n,m}(x, y)$ and first assume that $m = n + k$, where $n, k \ge 1$. Now consider the partial maxima
$$M^{(1)}_n = \max(X_1, X_2, \ldots, X_n), \quad M^{(2)}_m = \max(Y_1, Y_2, \ldots, Y_m).$$
We write $1 - F_{n,m}(x, y) = A_{n,m}(x, y) + C_{n,m}(x, y)$, where
$$A_{n,m}(x, y) = P(M^{(1)}_n \le x, M^{(2)}_m \le y) - P(S^{(1)}_n \le x, S^{(2)}_m \le y),$$
$$C_{n,m}(x, y) = 1 - P(M^{(1)}_n \le x, M^{(2)}_m \le y).$$
First consider $A_{n,m}(x, y)$. Writing $A_{n,m}(x, \infty) = A_{1,n}(x)$ and $A_{n,m}(\infty, y) = A_{2,m}(y)$, we have
$$0 \le A_{n,m}(x, y) \le A_{1,n}(x) + A_{2,m}(y).$$

For $A_{1,n}(x)$, we have
$$A_{1,n}(x) = F^n_1(x) - F^{*n}_1(x) = R_{1,n}(x) + B_{1,n}(x),$$
where
$$R_{1,n}(x) = 1 - F^{*n}_1(x) - n(1 - F_1(x)), \quad B_{1,n}(x) = n(1 - F_1(x)) - (1 - F^n_1(x)).$$

For $B_{1,n}(x)$ we use the inequality
$$|1 - x^n - n(1 - x)| \le \binom{n}{2}(1 - x)^2, \quad 0 \le x \le 1, \quad (6)$$
to obtain that $B_{1,n}(x) = O(1)\overline{F}^2_1(x)$. Since $F_1(x) \in S$, we have $R_{1,n}(x) = o(1)\overline{F}_1(x)$. We conclude that
$$A_{1,n}(x) = o(1)\overline{F}_1(x) + O(1)\overline{F}^2_1(x).$$
In a similar way, we can treat $A_{2,m}(y)$. Using $\overline{F}_1(x) \le \overline{F}(x, y)$ and $\overline{F}_2(y) \le \overline{F}(x, y)$, we find that
$$A_{n,m}(x, y) = o(1)\overline{F}(x, y) + O(1)\overline{F}^2(x, y).$$


Now consider $C_{n,m}(x, y)$. Since $m = n + k$, we find that $C_{n,m}(x, y) = 1 - F^n(x, y)F^k_2(y)$. Using (6) twice, we find that
$$C_{n,m}(x, y) = 1 - (1 - n\overline{F}(x, y) + O(1)\overline{F}^2(x, y))(1 - k\overline{F}_2(y) + O(1)\overline{F}^2_2(y)),$$
and then it follows that
$$C_{n,m}(x, y) = n\overline{F}(x, y) + k\overline{F}_2(y) - nk\overline{F}(x, y)\overline{F}_2(y) + O(1)\overline{F}^2(x, y) + O(1)\overline{F}^2_2(y).$$
Again using $\overline{F}_1(x) \le \overline{F}(x, y)$ and $\overline{F}_2(y) \le \overline{F}(x, y)$, this gives
$$C_{n,m}(x, y) = n\overline{F}(x, y) + k\overline{F}_2(y) + O(1)\overline{F}^2(x, y).$$
We conclude that
$$1 - F_{n,m}(x, y) = n\overline{F}(x, y) + k\overline{F}_2(y) + o(1)\overline{F}(x, y) + O(1)\overline{F}^2(x, y).$$
In a similar way, for $n = m + k$, $m, k \ge 1$, we get that
$$1 - F_{n,m}(x, y) = m\overline{F}(x, y) + k\overline{F}_1(x) + o(1)\overline{F}(x, y) + O(1)\overline{F}^2(x, y).$$

This proves (4). To prove (5), we use the identity
$$P(X > x, Y > y) = P(X > x) + P(Y > y) - (1 - P(X \le x, Y \le y)) \quad (7)$$
at the level of $(S^{(1)}_n, S^{(2)}_m)$ and combine it with (4) and the univariate expansions $1 - F^{*n}_i = n\overline{F}_i + o(1)\overline{F}_i$.
Without assuming more, it is not clear which of the terms is dominant in these expressions. It should be noted that these expansions hold as $\min(x, y) \to \infty$. In the next section we will let $\min(x, y) \to \infty$ in a more precise way.
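Relation (5) can also be probed by simulation. A sketch of ours (assumptions: comonotone pairs $(X_i, Y_i) = (X_i, X_i)$ with Pareto(1.5) margins, $n = 2$, $m = 3$, so that $\min(n, m) = 2$ and $P(X > x, Y > y) = \overline{F}_1(\max(x, y))$):

import numpy as np

rng = np.random.default_rng(4)
alpha, reps = 1.5, 2 * 10**6
X = rng.uniform(size=(reps, 3)) ** (-1.0 / alpha)  # X_i = Y_i: comonotone pairs
S1 = X[:, :2].sum(axis=1)                          # S^(1)_2, first component
S2 = X.sum(axis=1)                                 # S^(2)_3, second component
for x, y in ((50.0, 40.0), (150.0, 120.0)):
    emp = ((S1 > x) & (S2 > y)).mean()
    # (5): ratio to min(n, m) * P(X > x, Y > y) should approach 1
    print(x, y, emp / (2.0 * max(x, y) ** -alpha))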

    3.4 Regular variation

Now assume that there exist functions $a(t)$ and $b(t)$ such that, as $t \to \infty$, we have $a(t) \uparrow \infty$ and $b(t) \uparrow \infty$, and such that
$$t(1 - F(a(t)x, b(t)y)) \to \lambda(x, y) < \infty, \quad (8)$$
for all $x, y > 0$ with $\min(x, y) < \infty$. Taking $0 < x < \infty$ and $y = \infty$, we have
$$t(1 - F_1(a(t)x)) \to \lambda(x, \infty).$$
Replacing $t$ by $a^{\leftarrow}(t)$, the inverse of $a(t)$, we have
$$a^{\leftarrow}(t)(1 - F_1(tx)) \to \lambda(x, \infty).$$
If $\lambda(x, \infty) > 0$, we find that $\lambda(x, \infty)$ is of the form $\lambda(x, \infty) = cx^{-\alpha}$, where $c > 0$, and then it follows that $\overline{F}_1(x) \in RV(-\alpha)$. Moreover, $a^{\leftarrow}(t)$ and consequently also $a(t)$ are regularly varying functions.
Relations of the type (8) were studied among others by de Haan et al. (1983, 1984) and Omey (1982, 1989, 1990).
Using (8) and Theorem 15, we have the following result. It generalizes a result of Omey (1990, Corollary 2.3).


Theorem 16 Suppose that $F_1(x), F_2(y) \in S$ and suppose that (8) holds. Then
(i)
$$t(1 - P(S^{(1)}_n \le a(t)x, S^{(2)}_m \le b(t)y)) \to (n - m)^+\lambda(x, \infty) + (m - n)^+\lambda(\infty, y) + \min(n, m)\lambda(x, y),$$
(ii)
$$tP(S^{(1)}_n > a(t)x, S^{(2)}_m > b(t)y) \to \min(n, m)\Lambda(x, y),$$
where $\Lambda(x, y) = \lambda(x, \infty) + \lambda(\infty, y) - \lambda(x, y)$, for all $x, y \ge 0$ with $x + y > 0$.

    Now consider the subordinated process and P(S(1)N x; S

    (2)M y). For the

    marginals, there are many situations (cf. Proposition 1) under which we have

    P(S(1)N > x) s E(N)F1(x), as x ! 1, (9)

    P(S(2)M > y) s E(M)F2(y), as y ! 1. (10)

    We prove the following result.

Theorem 17 Suppose that $F_1(x), F_2(x) \in S$ and suppose that (8), (9) and (10) hold. Then for all $x, y > 0$, we have
(i) $tP(S^{(1)}_N > a(t)x, S^{(2)}_M > b(t)y) \to E(\min(N, M))\Lambda(x, y)$;
(ii)
$$t(1 - P(S^{(1)}_N \le a(t)x, S^{(2)}_M \le b(t)y)) \to E((N - M)^+)\lambda(x, \infty) + E((M - N)^+)\lambda(\infty, y) + E(\min(N, M))\lambda(x, y).$$

Proof. We have
$$P(S^{(1)}_N > x, S^{(2)}_M > y) = \sum\sum p_{n,m}P(S^{(1)}_n > x, S^{(2)}_m > y).$$
Now observe the following facts:
(1) $p_{n,m}tP(S^{(1)}_n > a(t)x, S^{(2)}_m > b(t)y) \to p_{n,m}\min(n, m)\Lambda(x, y)$;
(2) $p_{n,m}tP(S^{(1)}_n > a(t)x, S^{(2)}_m > b(t)y) \le p_{n,m}tP(S^{(1)}_n > a(t)x)$;
(3) $p_{n,m}tP(S^{(1)}_n > a(t)x) \to p_{n,m}n\lambda(x, \infty)$;
(4) $\sum\sum p_{n,m}tP(S^{(1)}_n > a(t)x) = tP(S^{(1)}_N > a(t)x) \to E(N)\lambda(x, \infty)$;
(5) $\sum\sum p_{n,m}n\lambda(x, \infty) = E(N)\lambda(x, \infty)$.
Using Pratt's extension of Lebesgue's theorem, we get that
$$tP(S^{(1)}_N > a(t)x, S^{(2)}_M > b(t)y) \to E(\min(N, M))\Lambda(x, y).$$
The second result follows from the first result and (7), (9), (10).

Remarks.
1) If $X$ and $Y$ are independent and if (8) holds, we have
$$tP(X > a(t)x, Y > b(t)y) = tP(X > a(t)x)P(Y > b(t)y) \to \lambda(x, \infty) \cdot 0 = 0.$$
In this case, using (7), we obtain that $\lambda(x, y) = \lambda(x, \infty) + \lambda(\infty, y)$.
2) If $F(x, y) = \min(F_1(x), F_2(y))$ and (8) holds, then $\Lambda(x, y) = \min(\lambda(x, \infty), \lambda(\infty, y))$.
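As a concrete illustration of (8) and Remark 1 (a worked example of ours): let $X$ and $Y$ be independent with $\overline{F}_1(x) = x^{-\alpha}$ and $\overline{F}_2(y) = y^{-\beta}$ on $[1, \infty)$, and take $a(t) = t^{1/\alpha}$ and $b(t) = t^{1/\beta}$. Then
$$t(1 - F(a(t)x, b(t)y)) = t(\overline{F}_1(a(t)x) + \overline{F}_2(b(t)y) - \overline{F}_1(a(t)x)\overline{F}_2(b(t)y)) \to x^{-\alpha} + y^{-\beta},$$
so that $\lambda(x, y) = \lambda(x, \infty) + \lambda(\infty, y)$ and $\Lambda(x, y) = 0$. Since $(n - m)^+ + \min(n, m) = n$, Theorem 17(ii) then reads
$$t(1 - P(S^{(1)}_N \le a(t)x, S^{(2)}_M \le b(t)y)) \to E(N)x^{-\alpha} + E(M)y^{-\beta}.$$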


    3.5 Second order behaviour

In the univariate theory, it is possible to prove rate of convergence results in Proposition 1. As an example, we mention some results of Omey and Willekens (1986, 1987) and Willekens (1986). Related results are in Omey (1994), Baltrunas and Omey (1998, 2002), Baltrunas et al. (2006), Omey and Teugels (2002), and the references given there. In the first result we assume that $\mu = E(X) < \infty$ and we use the notation
$$R_{1,N}(x) = P(S^{(1)}_N > x) - E(N)\overline{F}_1(x).$$

Proposition 18 (Omey and Willekens, 1987) Assume that $E((1 + \varepsilon)^N) < \infty$ [...]

1) As a first example, take $F$ such that
$$P(X > x, Y > y) = Ax^{-\alpha}y^{-\beta}(\max(x, y))^{-\gamma}, \quad x, y \ge 1.$$
In this example we have
$$\overline{F}_1(x) = Ax^{-\alpha-\gamma},\ x \ge 1, \qquad \overline{F}_2(y) = Ay^{-\beta-\gamma},\ y \ge 1,$$
and
$$\overline{F}(x, y) = \overline{F}_1(x) + \overline{F}_2(y) - Ax^{-\alpha}y^{-\beta}(\max(x, y))^{-\gamma}.$$

2) As a second example, take $F$ such that
$$\overline{F}(x, y) = \theta\max(\overline{F}_1(x), \overline{F}_2(y)) + (1 - \theta)(\overline{F}_1(x) + \overline{F}_2(y)),$$
where $0 < \theta < 1$. [...]

To be continued.

    4 References

1. A. Baltrunas and E. Omey (1998), The rate of convergence for subexponential distributions, Liet. Matem. Rink. 38(1), 1-18.

2. A. Baltrunas and E. Omey (2002), The rate of convergence for subexponential distributions and densities, Liet. Matem. Rink. 42(1), 1-18.

3. A. Baltrunas, E. Omey and S. Van Gulck (2006), Hazard rates and subexponential distributions, Publ. Inst. Math. Beograd (N.S.) 80(94), 29-46.


4. N.H. Bingham, C.M. Goldie and J.L. Teugels (1987), Regular Variation, Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge.

5. J. Chover, P. Ney and S. Wainger (1973a), Functions of probability measures, J. Anal. Math. 26, 255-302.

6. J. Chover, P. Ney and S. Wainger (1973b), Degeneracy properties of subcritical branching processes, Ann. Prob. 1, 663-673.

7. D.B.H. Cline (1987), Convolutions of distributions with exponential and subexponential tails, J. Austral. Math. Soc. Ser. A 43, 347-365.

8. D.B.H. Cline and S.I. Resnick (1992), Multivariate subexponential distributions, Stochast. Process. Appl. 42, 49-72.

9. V.P. Chistyakov (1964), A theorem on sums of independent positive random variables and its application to branching random processes, Theory Probab. Appl. 9, 640-648.

10. D.J. Daley, E. Omey and R. Vesilo (2007), The tail behaviour of a random sum of subexponential random variables and vectors, Extremes 10, 21-39.

11. P. Embrechts, C.M. Goldie and N. Veraverbeke (1979), Subexponentiality and infinite divisibility, Z. Wahrsch. Verw. Gebiete 49, 335-347.

12. P. Embrechts and C.M. Goldie (1980), On closure and factorization properties of subexponential and related distributions, J. Austral. Math. Soc. Ser. A 29, 243-256.

13. P. Embrechts and C.M. Goldie (1982), On convolution tails, Stochast. Process. Appl. 13, 263-278.

14. P. Embrechts (1985), Subexponential distribution functions and their applications: a review, Proc. 7th Conf. on Prob. Theory (Brasov, Romania), 125-136.

15. P. Embrechts, C. Klüppelberg and T. Mikosch (1997), Modelling Extremal Events, Applications of Mathematics, Stochastic Modelling and Applied Probability 33, Springer, New York.

16. J. Geluk and L. de Haan (1987), Regular Variation, Extensions and Tauberian Theorems, CWI Tract 40, Centre for Mathematics and Computer Science, Amsterdam.

17. L. de Haan and E. Omey (1983), Integrals and derivatives of regularly varying functions in [...]


19. L. de Haan (1970), On Regular Variation and its Application to the Weak Convergence of Sample Extremes, Math. Centre Tract 32, Amsterdam.

20. M.V. Johns (1957), Non-parametric empirical Bayes procedures, Ann. Math. Stat. 28, 649-669.

21. M. Maejima (1972), A generalization of Blackwell's theorem for renewal processes to the case of non-identically distributed random variables, Rep. Statist. Appl. Res. JUSE 19, 1-9.

22. F. Mallor and E. Omey (2006), Univariate and Multivariate Weighted Renewal Theory, Collection of Monographies from the Department of Statistics and Operations Research, No. 2, Public University of Navarre, Spain (ISBN 84-9769-127-X).

23. F. Mallor, E. Omey and J. Santos (2006), Multivariate subexponential distributions and random sums of random vectors, Adv. Appl. Prob. 38(4), 1028-1046.

24. E. Omey (1982), Multivariate Reguliere Variatie en Toepassingen in Kanstheorie [Multivariate Regular Variation and Applications in Probability Theory], Ph.D. Thesis, K.U. Leuven (in Dutch).

25. E. Omey (1989), Multivariate regular variation and applications in probability theory, Eclectica 74, EHSAL, Brussels.

26. E. Omey (1990), Random sums of random vectors, Publ. Inst. Math. Beograd (N.S.) 48(62), 191-198.

27. E. Omey (2006), Subexponential distributions in [...]


34. A. Skučaitė (2004), Large deviations for sums of independent heavy-tailed random variables, Lithuanian Math. J. 44(2), 198-208.

35. E. Seneta (1976), Regularly Varying Functions, Lecture Notes in Mathematics 508, Springer-Verlag, New York.

36. V.V. Shneer (2004), Estimates for the distributions of the sums of subexponential random variables, Sib. Math. J. 45(6), 1143-1158.

37. A. Stam (1973), Regular variation of the tail of a subordinated probability distribution, Adv. Appl. Prob. 5, 308-327.

38. J.L. Teugels (1975), The class of subexponential distributions, Ann. Prob. 3, 1000-1011.

39. E. Willekens (1986), Higher-order Theory for Subexponential Distributions, Ph.D. Thesis, K.U. Leuven (in Dutch).
