Moment inequalities for sums of certain independent symmetric random variables

by

P. HITCZENKO* (Raleigh, NC), S. J. MONTGOMERY-SMITH† (Columbia, MO) and K. OLESZKIEWICZ‡ (Warszawa)

Abstract

This paper gives upper and lower bounds for moments of sums of independent random variables (X_k) which satisfy the condition that P(|X_k| ≥ t) = exp(−N_k(t)), where the N_k are concave functions. As a consequence we obtain precise information about the tail probabilities of linear combinations of independent random variables for which N(t) = |t|^r for some fixed 0 < r ≤ 1. This complements work of Gluskin and Kwapień, who have done the same for convex functions N.

1. Introduction.

Let X_1, X_2, … be a sequence of independent random variables. In this paper we will be interested in obtaining estimates on the L_p-norm of sums of (X_k), i.e. we will be interested in the quantity
$$\Big\|\sum X_k\Big\|_p = \Big(E\Big|\sum X_k\Big|^p\Big)^{1/p}, \qquad 2 \le p < \infty.$$

Although using standard symmetrization arguments our results carry over to more general cases, we will assume for the sake of simplicity that the X_k have symmetric distributions, i.e. that P(X_k ≤ t) = P(−X_k ≤ t) for all t ∈ R.

Let us start by considering the case of linear combinations of identically distributed, independent (i.i.d.) random variables, that is X_k = a_kY_k, where Y, Y_1, … are i.i.d. We can assume without loss of generality that all a_k's are nonnegative. Also, since the Y_k's are i.i.d., we can rearrange the terms of (a_k) arbitrarily without affecting the sum ∑ a_kY_k.

Therefore, for notational convenience, we will adopt the following convention throughout this paper: whenever we are dealing with a sequence of real numbers we will always assume that its terms are nonnegative and form a nonincreasing sequence. In other words, we identify a sequence (a_k) with the decreasing rearrangement of (|a_k|).

Of course, a huge number of inequalities concerning the L_p-norm of a sum ∑ a_kY_k are known. Let us recall two of them. The first, called Khintchine's inequality, deals with the Rademacher sequence, i.e. Y = ε, where ε takes values ±1, each with probability 1/2.

AMS 1991 Subject Classification: 60E15, 60G50
* Supported in part by NSF grant DMS 9401345
† Supported in part by NSF grant DMS 9424396
‡ Supported in part by Foundation for Polish Science and KBN Grant 2 P301 022 07


For p ≥ 2 it can be formulated as follows (see e.g. Ledoux and Talagrand (1991, Lemma 4.1)): there exists an absolute constant K such that
$$\Big(\sum a_k^2\Big)^{1/2} \le \Big\|\sum a_k\varepsilon_k\Big\|_p \le K\sqrt{p}\,\Big(\sum a_k^2\Big)^{1/2}.$$

The value of the smallest constant that can be put in place of K√p is known (see Haagerup (1982)). The second inequality was proved by Rosenthal (1970), and in our generality says that for 2 ≤ p < ∞ there exists a constant B_p such that for a sequence a = (a_k) ∈ ℓ_2,
$$\max\{\|a\|_2\|Y\|_2,\ \|a\|_p\|Y\|_p\} \le \Big\|\sum a_kY_k\Big\|_p \le B_p\max\{\|a\|_2\|Y\|_2,\ \|a\|_p\|Y\|_p\},$$
where ‖a‖_s denotes the ℓ_s-norm of the sequence a. The best possible constant B_p is known (cf. Utev (1985) for p ≥ 4, and Figiel, Hitczenko, Johnson, Schechtman and Zinn (1995) for 2 < p < 4). We refer the reader to the latter paper and references therein for more information on the best constants in Rosenthal's inequality. For our purpose, we only note that Johnson, Schechtman and Zinn (1983) showed that B_p ≤ Kp/log p for some absolute constant K, and that up to the value of K, this bound cannot be further improved. This, and other related results, were extended in various directions by Pinelis (1994). When specialized to our generality, his inequality reads as

$$\max\{\|a\|_2\|Y\|_2,\ \|a\|_p\|Y\|_p\} \le \Big\|\sum a_kY_k\Big\|_p \le \inf_{1\le c\le p}\max\big\{\sqrt{c}\,e^{p/c}\,\|a\|_2\|Y\|_2,\ c\,\|a\|_p\|Y\|_p\big\}.$$

A common feature of these two inequalities is that they express the L_p-norm of a sum ∑ a_kY_k in terms of norms of the individual terms a_kY_k. This is very useful, and both inequalities have found numerous applications. However, the upper and lower bounds in these inequalities differ by a factor that depends on p, and are sometimes quite insensitive to the structure of the coefficient sequence. (Rosenthal's inequality is also insensitive to the common distribution of the Y_k's.) Consider, for example, the coefficient sequence a_k = 1/k, k ≥ 1. Then the only information on ‖∑ a_kε_k‖_p given by Khintchine's inequality is that it is essentially between 1 and √p. Practically the same conclusion is given for two quite different sequences, namely a_k = 2^{−k}, k ≥ 1, and a_k = 1/√n or 0 according to whether k ≤ n or k > n, n ∈ N. (The "true values" of ‖∑ a_kε_k‖_p are rather different in each of these three cases, and are of order log(1 + p), 1, and √(p ∧ n), respectively.)
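These three orders are easy to probe numerically. The following is a small illustrative Monte Carlo sketch of ours (not part of the paper; the sample size, truncation lengths and the value of p are arbitrary choices), estimating ‖∑ a_kε_k‖_p for the three sequences:

```python
import numpy as np

rng = np.random.default_rng(0)

def rademacher_norm(a, p, trials=20000):
    """Monte Carlo estimate of || sum_k a_k eps_k ||_p for Rademacher eps_k."""
    a = np.asarray(a, dtype=float)
    eps = rng.choice([-1.0, 1.0], size=(trials, a.size))
    return (np.abs(eps @ a) ** p).mean() ** (1.0 / p)

p, n = 8.0, 64
examples = {
    "a_k = 1/k      ": 1.0 / np.arange(1, 257),      # truncated at 256 terms
    "a_k = 2^(-k)   ": 2.0 ** -np.arange(1, 51),
    "a_k = 1/sqrt(n)": np.full(n, 1.0 / np.sqrt(n)),
}
for label, a in examples.items():
    print(label, rademacher_norm(a, p))
# Expected orders as p grows: log(1 + p), constant, and sqrt(min(p, n)).
```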

From this point of view, it is natural to ask whether more precise information on the size of ‖∑ a_kY_k‖_p can be obtained. Although in general the answer to this question may be difficult, there are cases for which there is a satisfactory answer. First of all, if Y_k is a standard Gaussian random variable, then ‖∑ a_kY_k‖_p = ‖a‖_2‖Y‖_p, so that
$$c\sqrt{p}\,\|a\|_2 \le \Big\|\sum a_kY_k\Big\|_p \le C\sqrt{p}\,\|a\|_2,$$
for some absolute constants c and C. Next consider the case when (Y_k) = (ε_k), a Rademacher sequence. We have
$$c\Big\{\sum_{k\le p}a_k + \sqrt{p}\Big(\sum_{k>p}a_k^2\Big)^{1/2}\Big\} \le \Big\|\sum a_k\varepsilon_k\Big\|_p \le C\Big\{\sum_{k\le p}a_k + \sqrt{p}\Big(\sum_{k>p}a_k^2\Big)^{1/2}\Big\}.$$


(Recall that, according to our convention, a_1 ≥ a_2 ≥ … ≥ 0.) The above inequality was established in Hitczenko (1993), although the proof drew heavily on a technique from Montgomery-Smith (1990). (In fact, the inequality for Rademacher variables can be deduced from the results obtained in the latter paper.) The next step was taken by Gluskin and Kwapień (1995), who dealt with the case of random variables with logarithmically concave tails. To describe their result precisely, suppose that Y is a symmetric random variable such that for t > 0 one has P(|Y| ≥ t) = exp(−N(t)), where N is an Orlicz function (i.e. convex, nondecreasing, and N(0) = 0). Recall that if M is an Orlicz function and (a_k) is a sequence of scalars, then the Orlicz norm of (a_k) is defined by ‖(a_k)‖_M = inf{u > 0 : ∑ M(a_k/u) ≤ 1}. Let N′ be the function conjugate to N, i.e. N′(t) = sup{st − N(s) : s > 0}, and put M_p(t) = N′(pt)/p. Then Gluskin and Kwapień (1995) proved that

$$c\Big\{\|(a_k)_{k\le p}\|_{M_p} + \sqrt{p}\,\|(a_k)_{k>p}\|_2\Big\} \le \Big\|\sum a_kY_k\Big\|_p \le C\Big\{\|(a_k)_{k\le p}\|_{M_p} + \sqrt{p}\,\|(a_k)_{k>p}\|_2\Big\}.$$

In the special case N(t) = |t|^r, r ≥ 1, this gives
$$c\Big\{\|(a_k)_{k\le p}\|_{r'}\,\|Y\|_p + \sqrt{p}\,\|(a_k)_{k>p}\|_2\,\|Y\|_2\Big\} \le \Big\|\sum a_kY_k\Big\|_p \le C\Big\{\|(a_k)_{k\le p}\|_{r'}\,\|Y\|_p + \sqrt{p}\,\|(a_k)_{k>p}\|_2\,\|Y\|_2\Big\},$$
where 1/r′ + 1/r = 1, and c, C are absolute constants. From now on, we will refer to random variables corresponding to N(t) = |t|^r as symmetric Weibull random variables with parameter r.
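Since u ↦ ∑_k M(a_k/u) is nonincreasing in u, the Orlicz norm appearing above can be computed by bisection. The following sketch of ours (illustrative only; the function names are our own) does this for a generic Orlicz function; as a sanity check, M(t) = t^s must return the ordinary ℓ_s norm.

```python
import numpy as np

def orlicz_norm(a, M, tol=1e-10):
    """||a||_M = inf{u > 0 : sum_k M(|a_k| / u) <= 1}, found by bisection.
    Assumes M is nondecreasing with M(0) = 0, so u -> sum_k M(|a_k|/u)
    is nonincreasing in u."""
    a = np.abs(np.asarray(a, dtype=float))
    hi = max(a.sum(), 1e-12)
    while np.sum(M(a / hi)) > 1.0:   # grow until the constraint is feasible
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if np.sum(M(a / mid)) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

a, s = [0.9, 0.5, 0.3, 0.1], 3.0
print(orlicz_norm(a, lambda t: t ** s))       # bisection
print(np.sum(np.abs(a) ** s) ** (1.0 / s))    # exact l_s norm, should agree
```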

Now let us describe the results of this paper. We will say that a symmetric random variable has a logarithmically convex tail if P(|X| ≥ t) = exp(−N(t)) for t ≥ 0, where N : R_+ → R_+ is a concave function with N(0) = 0. We will show the following result.

Theorem 1.1. There exist absolute constants c and K such that if (X_k) is a sequence of independent random variables with logarithmically convex tails, then for each p ≥ 2,
$$c\Big\{\Big(\sum\|X_k\|_p^p\Big)^{1/p} + \sqrt{p}\Big(\sum\|X_k\|_2^2\Big)^{1/2}\Big\} \le \Big\|\sum X_k\Big\|_p \le K\Big\{\Big(\sum\|X_k\|_p^p\Big)^{1/p} + \sqrt{p}\Big(\sum\|X_k\|_2^2\Big)^{1/2}\Big\}.$$
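As a quick numerical illustration (our sketch, not from the paper; the choice r = 1/2, p = 4 and the coefficients are arbitrary), one can sample sums of weighted symmetric Weibull variables, whose tails exp(−t^{1/2}) are logarithmically convex, and compare ‖∑ X_k‖_p against the two-sided bracket of Theorem 1.1; the ratio should stay bounded away from 0 and ∞. The exact moment formula for ‖X‖_s used below is the one computed in Section 6.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(1)

def sym_weibull(r, size):
    """Symmetric Weibull: P(|X| > t) = exp(-t^r), via |X| = Exp(1)^(1/r)."""
    return rng.choice([-1.0, 1.0], size=size) * rng.exponential(size=size) ** (1.0 / r)

r, p, trials = 0.5, 4.0, 200_000
a = 1.0 / np.arange(1, 41)                # X_k = a_k * W_k, W symmetric Weibull(r)
S = sym_weibull(r, (trials, a.size)) @ a
lhs = (np.abs(S) ** p).mean() ** (1.0 / p)

w_norm = lambda s: gamma(1.0 + s / r) ** (1.0 / s)   # ||W||_s (see Section 6)
bracket = (np.sum(a ** p) ** (1.0 / p) * w_norm(p)
           + np.sqrt(p) * np.sqrt(np.sum(a ** 2)) * w_norm(2.0))
print(lhs / bracket)   # Theorem 1.1: bounded above and below by absolute constants
```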

This result includes the special case of linear combinations of symmetric Weibull random variables with fixed parameter r ≤ 1, and in this case we are also able to obtain information about the tail distributions of the sums. In fact, we will give a second proof of the moment inequalities in this special case. We include this second proof because we believe that the methods will be of great interest to specialists.

Let us mention that the main difficulty is to obtain tight upper bounds. Once this is done, the lower bounds are rather easy to prove. For this reason, the main emphasis of this paper is on the proofs of upper bounds.

We will now outline the main steps in both proofs of the upper bounds, and at the same time describe the organization of this paper. In Section 2, we describe a weaker version of the result that depends upon a certain moment condition. In particular, if we specialise to linear combinations of symmetric Weibull random variables, we obtain constants that become unbounded as the parameter r tends to zero (but are still universal in p ≥ 2). The proof is based on hypercontractive methods that were suggested to us by Kwapień. In Section 3 we will show that moments of sums of symmetric random variables with logarithmically convex tails are dominated by those of linear combinations of suitably chosen multiples of standard exponential random variables by independent symmetric three-valued random variables. We are then able to obtain the main result.

The main idea of the second proof is to reduce the problem to the case of an i.i.d. sum. This is done in Section 4. In Section 5 we deal with the problem of finding an upper bound for the L_p-norm of a sum of i.i.d. random variables. We accomplish this by estimating from above a decreasing rearrangement of the sum in terms of a decreasing rearrangement of an individual summand. In Section 6 we apply the results from the preceding two sections to obtain an upper bound on the L_p-norm of ∑ a_kX_k, where (X_k) are i.i.d. symmetric with P(|X| > t) = exp(−t^r), 0 < r ≤ 1.

2. Random variables satisfying a moment condition.

In this section we will prove a variant of Theorem 1.1, where the random variables satisfy a certain moment condition. We use hypercontractive methods, similar to those in Kwapień and Szulga (1991). This approach was suggested to us by S. Kwapień. It should be emphasized that if the result is specialized to symmetric Weibull random variables, then the constant B in Theorem 2.2 below depends on r. Let us begin with the following.

Definition 2.1. We say that a random variable X satisfies the moment condition if it is symmetric, all moments of X are finite, and there exist positive constants b, c such that for all even natural numbers n ≥ k ≥ 2,
$$\frac{1}{b}\cdot\frac{n}{k} \le \frac{\|X\|_n}{\|X\|_k} \le c^{\,n-k}.$$

It is easy to check that the existence of such a c is equivalent to the finiteness of
$$\sup_{n\in\mathbb{N}}\big(E|X|^n\big)^{1/n^2},$$
or to the condition
$$\limsup_{t\to\infty}(\ln t)^{-2}\ln P(|X| > t) < 0.$$

Here is the main result of this section.

Theorem 2.2. If independent real random variables X_1, X_2, … satisfy the moment condition with the same constants b and c, then there exist positive constants A and B such that for any p ≥ 2, any natural number m, and S = ∑_{i=1}^m X_i, the following inequalities hold:
$$A\Big(\sqrt{p}\,\|S\|_2 + \Big(\sum_{k=1}^m\|X_k\|_p^p\Big)^{1/p}\Big) \le \|S\|_p \le B\Big(\sqrt{p}\,\|S\|_2 + \Big(\sum_{k=1}^m\|X_k\|_p^p\Big)^{1/p}\Big).$$

We will prove a lemma first.


Lemma 2.3. If X satisfies the moment condition, then there exists a positive constant K such that for any q ≥ 1 and any real x the following inequality holds true:
$$\big(E|x+X|^{2q}\big)^{1/q} \le 4eq\,EX^2 + \big(|x|^{2q} + K^{2q}E|X|^{2q}\big)^{1/q}.$$

Proof: Put K = √2·e·b·c⁴ + 1. Let n be the least natural number greater than q. We consider two cases:

(i) |x| ≤ (K − 1)‖X‖_{2q}. Then ‖x + X‖_{2q} ≤ |x| + ‖X‖_{2q} ≤ K‖X‖_{2q}.

(ii) |x| ≥ (K − 1)‖X‖_{2q} ≥ ((K − 1)/c²)‖X‖_{2n}. Then
$$\begin{aligned}
E(x+X)^{2n} - x^{2n} &= \sum_{k=1}^{n}\frac{2n(2n-1)\cdots(2n-2k+1)}{(2k)!}\,x^{2n-2k}\,EX^{2k}\\
&\le 4n^2x^{2n-2}\sum_{k=1}^{n}\frac{(2n)^{2k-2}\,\|X\|_{2k}^{2k}}{(2k)!\,x^{2k-2}}\\
&\le 4n^2x^{2n-2}\sum_{k=1}^{n}\frac{\|X\|_{2k}^{2}\,(2n)^{2k-2}}{(2k)!\,\big(\tfrac{K-1}{c^2}\big)^{2k-2}}\Big(\frac{\|X\|_{2k}}{\|X\|_{2n}}\Big)^{2k-2}\\
&\le 4n^2x^{2n-2}\sum_{k=1}^{n}\frac{\big(\|X\|_2\,c^{2k-2}\big)^2\,(2n)^{2k-2}\,\big(\tfrac{bk}{n}\big)^{2k-2}}{(2k)!\,\big(\tfrac{K-1}{c^2}\big)^{2k-2}}\\
&= 4n^2x^{2n-2}\sum_{k=1}^{n}\frac{EX^2}{(2k)!}\Big(\frac{2bc^4k}{K-1}\Big)^{2k-2}\\
&\le n^2x^{2n-2}\,e\,EX^2\sum_{k=1}^{n}\Big(\frac{ebc^4}{K-1}\Big)^{2k-2}\\
&\le 2e\,EX^2\,n^2x^{2n-2}.
\end{aligned}$$
Hence
$$\|x+X\|_{2q} \le \|x+X\|_{2n} \le \sqrt{x^2 + 2e\,EX^2\,n} \le \sqrt{x^2 + 4e\,EX^2\,q}.$$
This completes the proof of Lemma 2.3.

Proof of Theorem 2.2: We begin with the left hand side inequality. The inequality
$$\|S\|_p \ge \Big(\sum_{k=1}^m E|X_k|^p\Big)^{1/p}$$
follows from Rosenthal's inequality (Rosenthal (1970)), or by an easy induction argument and the inequality 2(|x|^p + |y|^p) ≤ |x + y|^p + |x − y|^p. To complete the proof of this part we will show that if X is a random variable satisfying the left hand side inequality in Definition 2.1 with constant b, then
$$\|S\|_p \ge \frac{1}{2\sqrt{2}\,b}\,\|S\|_2\,\sqrt{p}.$$


To this end let G be a standard Gaussian random variable. Then, for every even natural number n we have
$$\frac{\|X\|_2}{2b}\,\|G\|_n = ((n-1)!!)^{1/n}\,\frac{\|X\|_2}{2b} \le n\,\frac{\|X\|_2}{2b} \le \|X\|_n.$$
Hence, by the binomial formula, for every even natural number n and any real number x,
$$E\Big|x + \frac{\|X\|_2}{2b}\,G\Big|^n \le E|x+X|^n.$$
If the G_k's are i.i.d. copies of G then, by an easy induction argument, we get that
$$E\Big|x + \frac{\|S\|_2}{2b}\,G\Big|^n = E\Big|x + \sum\frac{\|X_k\|_2}{2b}\,G_k\Big|^n \le E|x+S|^n.$$
Letting n be the largest even number not exceeding p, and putting x = 0, we obtain
$$\|S\|_p \ge \|S\|_n \ge \frac{\|S\|_2}{2b}\,\|G\|_n \ge \frac{\|S\|_2}{2b}\,\sqrt{n} \ge \frac{1}{2\sqrt{2}\,b}\,\|S\|_2\,\sqrt{p},$$
which completes the proof of the first inequality of Theorem 2.2.

To prove the second inequality we will proceed by induction. For N = 1, 2, …, m, let S_N = ∑_{k=1}^N X_k, h_N = K(∑_{k=1}^N E|X_k|^p)^{1/p}, and q = p/2. We will show that for any real x,
$$\big(E|x+S_N|^{2q}\big)^{1/q} \le 4eq\,ES_N^2 + \big(|x|^{2q} + h_N^{2q}\big)^{1/q}.$$

For N = 1 this is just Lemma 2.3. Assume that the above inequality is satisfied for N < m. Let E′ = E( · | X_1, X_2, …, X_N) and E′′ = E( · | X_{N+1}). Then
$$\big(E|x+S_{N+1}|^{2q}\big)^{1/q} = \big(E''E'|(x+S_N)+X_{N+1}|^{2q}\big)^{1/q}$$
which, by Lemma 2.3, is
$$\begin{aligned}
&\le \Big(E'\big(4eq\,EX_{N+1}^2 + (|x+S_N|^{2q} + K^{2q}E|X_{N+1}|^{2q})^{1/q}\big)^{q}\Big)^{1/q}\\
&\le 4eq\,EX_{N+1}^2 + \big(E'|x+S_N|^{2q} + K^{2q}E|X_{N+1}|^{2q}\big)^{1/q}
\end{aligned}$$
and, by the inductive assumption,
$$\begin{aligned}
&\le 4eq\,EX_{N+1}^2 + \Big(\big(4eq\,ES_N^2 + (|x|^{2q}+h_N^{2q})^{1/q}\big)^{q} + K^{2q}E|X_{N+1}|^{2q}\Big)^{1/q}\\
&= 4eq\,EX_{N+1}^2 + \big\|\big(4eq\,ES_N^2 + (|x|^{2q}+h_N^{2q})^{1/q},\ K^2(E|X_{N+1}|^{2q})^{1/q}\big)\big\|_q\\
&\le 4eq\,EX_{N+1}^2 + \big\|\big(4eq\,ES_N^2,\ 0\big)\big\|_q + \big\|\big((|x|^{2q}+h_N^{2q})^{1/q},\ K^2(E|X_{N+1}|^{2q})^{1/q}\big)\big\|_q\\
&= 4eq\,ES_{N+1}^2 + \big(|x|^{2q} + h_{N+1}^{2q}\big)^{1/q}.
\end{aligned}$$

Our induction is finished. Taking N = m and x = 0 we have
$$\|S\|_p \le \sqrt{2ep\,ES^2 + h_m^2} \le \sqrt{2e}\,\|S\|_2\,\sqrt{p} + K\Big(\sum_{k=1}^m E|X_k|^p\Big)^{1/p}.$$

This completes the proof of Theorem 2.2.


3. Random variables with logarithmically convex tails.

The aim of this section is to prove the main result, Theorem 1.1. Although random variables with logarithmically convex tails do not have to satisfy the moment condition, we have the following.

Proposition 3.1. Let Γ be an exponential random variable with density e^{−x} for x ≥ 0. If X is a symmetric random variable with logarithmically convex tails, then for any p ≥ q > 0 the following inequality is satisfied:
$$\frac{\|X\|_p}{\|\Gamma\|_p} \ge \frac{\|X\|_q}{\|\Gamma\|_q}.$$
In particular, random variables with logarithmically convex tails satisfy the first inequality of Definition 2.1 with constant b = e/√2.

Proof: Let F = N^{−1}. Then |X| has the same distribution as F(Γ). Since N is a concave function and N(0) = 0, it follows that N(x)/x is nonincreasing. Therefore, f(x) = F(x)/x is nondecreasing. By a standard inequality for the measure x^p e^{−x} dx we have
$$\bigg(\frac{\int_0^\infty f^p(x)\,x^pe^{-x}dx}{\int_0^\infty x^pe^{-x}dx}\bigg)^{1/p} \ge \bigg(\frac{\int_0^\infty f^q(x)\,x^pe^{-x}dx}{\int_0^\infty x^pe^{-x}dx}\bigg)^{1/q},$$
so it suffices to prove that
$$\frac{\int_0^\infty f^q(x)\,x^pe^{-x}dx}{\int_0^\infty x^pe^{-x}dx} \ge \frac{\int_0^\infty f^q(x)\,x^qe^{-x}dx}{\int_0^\infty x^qe^{-x}dx}.$$
Since f^q(x) is a nondecreasing function, it is enough to show that
$$\frac{\int_a^\infty x^pe^{-x}dx}{\Gamma(p+1)} \ge \frac{\int_a^\infty x^qe^{-x}dx}{\Gamma(q+1)}, \qquad a \ge 0.$$
Let
$$h(a) = \frac{\int_a^\infty x^pe^{-x}dx}{\Gamma(p+1)} - \frac{\int_a^\infty x^qe^{-x}dx}{\Gamma(q+1)}.$$
Then h(0) = 0 and lim_{a→∞} h(a) = 0. Since h′(a) is positive on (0, a_0) and negative on (a_0, ∞), where a_0 = (Γ(p+1)/Γ(q+1))^{1/(p−q)}, we conclude that h(a) ≥ 0. To see that b = e/√2, notice that the sequence (n/‖Γ‖_n) = ((n^n/n!)^{1/n}) is increasing to e. Therefore, since 2/‖Γ‖_2 = √2,
$$\frac{\|X\|_n}{\|X\|_k} \ge \frac{\|\Gamma\|_n}{\|\Gamma\|_k} \ge \frac{n}{k}\cdot\frac{\sqrt{2}}{e}.$$

This completes the proof.

Since the left hand side inequality in Theorem 1.1 follows by exactly the same argument as in Theorem 2.2, we will concentrate on the right hand side inequality. We first establish a comparison result which may be of interest in its own right. Let Γ be an exponential random variable with mean 1. For a random variable X with logarithmically convex tails, and p > 2, we denote by E_p(X) a random variable distributed like aΘΓ, where Θ and Γ are independent, Θ is symmetric and 3-valued (i.e. P(Θ = ±1) = α/2, P(Θ = 0) = 1 − α), and a, α are chosen so that ‖X‖_2 = ‖E_p(X)‖_2 and ‖X‖_p = ‖E_p(X)‖_p. Proposition 3.1 guarantees that such a and α exist.
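For concreteness, this moment matching can be made explicit: since E|aΘΓ|^s = a^s·α·Γ(s+1), the two conditions determine a and α in closed form. Below is a small sketch of ours (the function name is our own; the Weibull moment formula from Section 6 is used in the check):

```python
from scipy.special import gamma

def match_theta_gamma(norm2, normp, p):
    """Solve ||a*Theta*Gamma||_2 = norm2 and ||a*Theta*Gamma||_p = normp
    for (a, alpha), using E|a*Theta*Gamma|^s = a^s * alpha * Gamma(s + 1)."""
    m2, mp = norm2 ** 2, normp ** p
    a = (2.0 * mp / (m2 * gamma(p + 1.0))) ** (1.0 / (p - 2.0))
    alpha = m2 / (2.0 * a ** 2)
    return a, alpha

# Example: X symmetric Weibull with r = 1/2, where ||X||_s^s = Gamma(1 + s/r).
r, p = 0.5, 6.0
a, alpha = match_theta_gamma(gamma(1.0 + 2.0 / r) ** 0.5,
                             gamma(1.0 + p / r) ** (1.0 / p), p)
print(a, alpha)   # Proposition 3.1 guarantees 0 < alpha <= 1 here
```

We begin with the following lemma.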


Lemma 3.2. There exists an absolute constant c such that for any random variable X with logarithmically convex tails and for every 2 ≤ q < p < ∞ one has
$$\|X\|_q \le c\,\|E_p(X)\|_q.$$

Proof: Replacing X by X/a we can assume a = 1. Assume first that 4 ≤ q ≤ p/2. We have
$$\|X\|_q^q = q\int_0^\infty t^{q-1}e^{-N(t)}dt = q\Big(\int_0^2 + \int_2^p + \int_p^\infty\Big)t^{q-1}e^{-N(t)}dt.$$

We will estimate each integral separately. By Markov’s inequality, for every t > 0 we have

$$(*)\qquad t^pe^{-N(t)} \le \|X\|_p^p = \|\Theta\|_p^p\,\|\Gamma\|_p^p.$$

Hence,
$$\int_p^\infty t^{q-1}e^{-N(t)}dt \le \|\Theta\|_p^p\,\|\Gamma\|_p^p\int_p^\infty t^{q-1-p}dt \le \|\Theta\|_p^p\,\Gamma(p+1)\,\frac{p^{q-p}}{p-q} \le \|\Theta\|_q^q\,\frac{2p\,p^q}{e^p} \le K^q\,\|\Theta\|_q^q\,q^q \le K^q\,\|\Theta\|_q^q\,\|\Gamma\|_q^q.$$

We estimate the first integral in a similar way; since for every t > 0
$$(**)\qquad t^2e^{-N(t)} \le \|\Theta\|_2^2\,\|\Gamma\|_2^2,$$
we get
$$\int_0^2 t^{q-1}e^{-N(t)}dt \le \|\Theta\|_2^2\,\|\Gamma\|_2^2\int_0^2 t^{q-3}dt \le C^q\,\|\Theta\|_2^2\,\|\Gamma\|_q^q = C^q\,\|\Theta\|_q^q\,\|\Gamma\|_q^q.$$

It remains to estimate the middle integral. Notice that (∗) with t = p and (∗∗) with t = 2 imply that
$$N(p) \ge p\ln\big(p/\|\Gamma\|_p\big) - \ln\alpha \ge p\ln(e/2) - \ln\alpha,$$
and
$$N(2) \ge 2\ln(e/2) - \ln\alpha.$$
Hence, by concavity of N, we infer that N(t) ≥ t ln(e/2) − ln α for 2 ≤ t ≤ p. Consequently,
$$\int_2^p t^{q-1}e^{-N(t)}dt \le \alpha\int_0^\infty t^{q-1}e^{-t\ln(e/2)}dt \le K^q\,\|\Theta\|_q^q\,\|\Gamma\|_q^q.$$

Now consider the case 2 ≤ q < 4. Since for q < p we have ‖Γ‖_q ≥ k(q/p)‖Γ‖_p for some absolute constant k > 0, and for Θ Hölder's inequality is in fact an equality, we obtain the following. If 0 < s < 1 is chosen so that 1/q = s/(2q) + (1 − s)/2, then
$$\begin{aligned}
\|\Theta\Gamma\|_q &= \|\Theta\|_{2q}^{s}\,\|\Gamma\|_q^{s}\,\|\Theta\|_2^{1-s}\,\|\Gamma\|_q^{1-s} \ge \|\Theta\|_{2q}^{s}\Big(\frac{k}{2}\,\|\Gamma\|_{2q}\Big)^{s}\,\|\Theta\|_2^{1-s}\,\|\Gamma\|_2^{1-s}\\
&\ge \frac{k}{2}\,\|\Theta\Gamma\|_{2q}^{s}\,\|\Theta\Gamma\|_2^{1-s} \ge \frac{k}{2}\,\big(\|X\|_{2q}/c_1\big)^{s}\,\|X\|_2^{1-s} \ge \frac{k}{2c_1}\,\|X\|_q.
\end{aligned}$$


Here c_1 denotes the absolute constant obtained in the first part of the proof (as 2q ≥ 4). Since a similar argument (with 1/q = s/p + (1 − s)/2) works for p/2 ≤ q ≤ p, the proof is complete.

In the remainder of this paper we will assume that if the (X_k) are independent, then the corresponding variables E_k are independent, too. We have the following.

Proposition 3.3. There exists an absolute constant K such that if (X_k) is a sequence of independent random variables with logarithmically convex tails and, for p > 2, (E_k) = (E_p(X_k)) is the corresponding sequence of (Θ_kΓ_k), then
$$\Big\|\sum X_k\Big\|_p \le K\,\Big\|\sum E_k\Big\|_p.$$

Proof: Let q ∈ N be an integer with p/2 ≤ 2q < p. It follows from Kwapień and Woyczyński (1992, Proposition 1.4.2) (cf. Hitczenko (1994, Proposition 4.1) for details) that if 2q ≤ p and (Z_k) is any sequence of independent, symmetric random variables, then
$$\Big\|\sum Z_k\Big\|_p \le K\,\frac{p}{2q}\Big\{\Big\|\sum Z_k\Big\|_{2q} + \big\|\max|Z_k|\big\|_p\Big\} \le K\,\frac{p}{2q}\Big\{\Big\|\sum Z_k\Big\|_{2q} + \Big(\sum\|Z_k\|_p^p\Big)^{1/p}\Big\}.$$

Therefore,
$$\Big\|\sum X_k\Big\|_p \le K\Big\{\Big\|\sum X_k\Big\|_{2q} + \Big(\sum\|X_k\|_p^p\Big)^{1/p}\Big\}.$$

It follows from the binomial formula and Lemma 3.2 that
$$\Big\|\sum X_k\Big\|_{2q} \le c\,\Big\|\sum E_k\Big\|_{2q}.$$

Hence we obtain
$$\Big\|\sum X_k\Big\|_p \le K\Big\{\Big\|\sum E_k\Big\|_{2q} + \Big(\sum\|E_k\|_p^p\Big)^{1/p}\Big\} \le K\,\Big\|\sum E_k\Big\|_p,$$
as required.

Remark 3.4. The above proposition is similar in spirit to a result of Utev (1985), who proved that if p > 4, (Y_k) is a sequence of independent symmetric random variables, and (Z_k) is a sequence of independent symmetric three-valued random variables whose second and pth moments are the same as those of the Y_k's, then
$$\Big\|\sum Y_k\Big\|_p \le \Big\|\sum Z_k\Big\|_p.$$
We do not know the best constant K in Proposition 3.3. We do know, however, that if p is an even number, then K ≤ e.

In order to finish the proof of Theorem 1.1 we need one more lemma.


Lemma 3.5. There exists a constant K such that if Θ is a symmetric 3-valued random variable with P(|Θ| = 1) = δ = 1 − P(Θ = 0), independent of Γ, then for every q ≥ 1 and x ∈ R we have
$$\big(E|x+\Theta\Gamma|^{2q}\big)^{1/q} \le eq\,E(\Theta\Gamma)^2 + \big(|x|^{2q} + K^{2q}E|\Theta\Gamma|^{2q}\big)^{1/q}.$$

Proof: We need to prove that
$$\big(|x|^{2q} + \delta\,(E|x+\Gamma|^{2q} - |x|^{2q})\big)^{1/q} \le 2eq\delta + \big(|x|^{2q} + \delta K^{2q}E\Gamma^{2q}\big)^{1/q}.$$

Set for simplicity A = E|x+Γ|^{2q} − |x|^{2q} and B = K^{2q}EΓ^{2q}. Since the case A ≤ B is trivial, assume A > B. It suffices to show that
$$\frac{\varphi(A\delta) - \varphi(B\delta)}{\delta} \le 2eq,$$
where φ(t) = (x^{2q} + t)^{1/q}. Since φ is a concave function, the left hand side above is decreasing in δ, so it suffices to check that
$$2eq \ge \lim_{\delta\to 0}\frac{\varphi(A\delta)-\varphi(B\delta)}{\delta} = A\varphi'(0) - B\varphi'(0) = \frac{1}{q}\,(A-B)\,x^{2-2q}.$$
But this is equivalent to
$$E|x+\Gamma|^{2q} - x^{2q} \le 2eq^2x^{2q-2} + K^{2q}E\Gamma^{2q} = eq^2x^{2q-2}E\Gamma^2 + K^{2q}E\Gamma^{2q}.$$

The latter inequality follows from the proof of Lemma 2.3, since the exponential variable satisfies the moment condition. (A direct proof, giving K = e², can be given, too.) The proof is complete.

Proof of Theorem 1.1 is now very easy. By Proposition 3.3 it suffices to prove the result for the sequence (E_k). But, in view of the previous lemma and the proof of Theorem 2.2, we get that
$$\Big\|\sum E_k\Big\|_p \le K\Big\{\Big(\sum\|E_k\|_p^p\Big)^{1/p} + \sqrt{p}\Big(\sum\|E_k\|_2^2\Big)^{1/2}\Big\},$$
which completes the proof.


4. Lp-domination of linear combinations by i.i.d. sums.

The aim of this section is to prove the following

Theorem 4.1. Let (Y_k) be i.i.d. symmetric random variables. For a sequence of scalars (a_k) ∈ ℓ_2, we let m = ⌈(‖a‖_2/‖a‖_p)^{2p/(p−2)}⌉, where ⌈x⌉ denotes the smallest integer no less than x. Then we have
$$\Big\|\sum a_kY_k\Big\|_p \le K\,\frac{\|a\|_p}{m^{1/p}}\,\Big\|\sum_{k=1}^{m\vee p}Y_k\Big\|_p,$$
for some absolute constant K.
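To get a feel for m, the following sketch of ours (illustrative only) evaluates it for the three coefficient sequences from the Introduction: m is small for rapidly decreasing coefficients and equals n (up to floating-point rounding) for a flat sequence.

```python
import numpy as np

def m_of(a, p):
    """m = ceil((||a||_2 / ||a||_p)^(2p/(p-2))) as in Theorem 4.1."""
    a = np.asarray(a, dtype=float)
    ratio = np.sqrt(np.sum(a ** 2)) / np.sum(np.abs(a) ** p) ** (1.0 / p)
    return int(np.ceil(ratio ** (2.0 * p / (p - 2.0))))

p, n = 8.0, 64
print(m_of(1.0 / np.arange(1, 257), p))    # a_k = 1/k (truncated)
print(m_of(2.0 ** -np.arange(1, 51), p))   # a_k = 2^-k: small m
print(m_of(np.full(n, n ** -0.5), p))      # flat sequence: m ~ n
```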

Remark 4.2.
(i) This result is related to an inequality obtained by Figiel, Iwaniec and Pełczyński (1984, Proposition 2.2′). They proved that if (a_k) is a sequence of scalars, and (Y_k) are symmetrically exchangeable, then
$$\Big\|\sum_{k=1}^n a_kY_k\Big\|_p \le \frac{\big(\sum_{k=1}^n|a_k|^p\big)^{1/p}}{n^{1/p}}\,\Big\|\sum_{k=1}^n Y_k\Big\|_p.$$
Although we do not get constant 1, our m is generally smaller than n. Also (cf. Marshall and Olkin (1979, Chapter 12.G)), for certain random variables (Y_k), including the Rademacher sequence, one has
$$\Big\|\sum_{k=1}^n a_kY_k\Big\|_p \le \frac{\big(\sum_{k=1}^n|a_k|^2\big)^{1/2}}{\sqrt{n}}\,\Big\|\sum_{k=1}^n Y_k\Big\|_p.$$
Our m is chosen so that the ℓ_2 and ℓ_p norms of the new coefficient sequence are essentially the same as those of the original one.

(ii) If, roughly, the first p coefficients in the sequence (a_k) are equal, then automatically m ≥ cp, and thus m ∨ p in the upper limit of summation can be replaced by m. This follows from the fact that if a_1 ≥ a_2 ≥ … ≥ 0, then the ratio
$$\frac{\sum_{j=1}^k a_j^2}{\big(\sum_{j=1}^k a_j^p\big)^{2/p}}$$
is increasing in k. This observation will be important in Section 6.

We will break up the proof of Theorem 4.1 into several propositions. Recall that for a random variable Z ∈ L_{2q}, q ∈ N, we have EZ^{2q} = (−1)^q φ_Z^{(2q)}(0), where φ_Z is the characteristic function of Z and φ_Z^{(2q)} its 2q-th derivative.


Proposition 4.3. Let (Y_k) be a sequence of i.i.d. symmetric random variables such that
$$(-1)^l(\ln\varphi_Y)^{(2l)}(0) \ge 0, \qquad l = 1,\dots,q.$$
Suppose that a and b are two sequences of real numbers such that ‖a‖_{2l} ≤ ‖b‖_{2l} for l = 1, …, q. Then
$$E\Big(\sum_{k=1}^n a_kY_k\Big)^{2q} \le E\Big(\sum_{k=1}^n b_kY_k\Big)^{2q}.$$

Proof: Let S = ∑_{k=1}^n a_kY_k. We will show that (−1)^q φ_S^{(2q)}(0) is an increasing function of ‖a‖_{2l}, l = 1, …, q. We have
$$\varphi_S(t) = \exp\{\ln\varphi_S(t)\} = \exp\Big\{\sum_k\ln\varphi_{a_kY}(t)\Big\}.$$
Differentiating once and then 2q − 1 times using the Leibniz formula we get
$$\varphi_S^{(2q)} = \sum_k\big(\varphi_S\,\ln'\varphi_{a_kY}\big)^{(2q-1)} = \sum_k\sum_{j=0}^{2q-1}\binom{2q-1}{j}(\ln\varphi_{a_kY})^{(j+1)}\,\varphi_S^{(2q-1-j)}.$$
Hence, evaluating at zero and using φ_S^{(j)}(0) = 0 for odd j, we get
$$\varphi_S^{(2q)}(0) = \sum_k\sum_{j=0}^{2q-1}\binom{2q-1}{j}(\ln\varphi_{a_kY})^{(j+1)}(0)\,\varphi_S^{(2q-1-j)}(0) = \sum_k\sum_{j=1}^{q}\binom{2q-1}{2j-1}a_k^{2j}\,(\ln\varphi_Y)^{(2j)}(0)\,\varphi_S^{(2(q-j))}(0),$$
and it follows that
$$\|S\|_{2q}^{2q} = (-1)^q\varphi_S^{(2q)}(0) = \sum_{j=1}^{q}\|a\|_{2j}^{2j}\binom{2q-1}{2j-1}(-1)^j(\ln\varphi_Y)^{(2j)}(0)\,\|S\|_{2(q-j)}^{2(q-j)}.$$
Since we assume that (−1)^j(ln φ_Y)^{(2j)}(0) ≥ 0, the result follows by induction on q.

Proposition 4.4. Let p ≥ 2. Suppose that a and b are two sequences of real numbers such that ‖a‖_s ≤ ‖b‖_s for 2 ≤ s ≤ p. Let (Y_k) be i.i.d. symmetric such that (−1)^l(ln φ_Y)^{(2l)}(0) ≥ 0 for all l ≤ p. Then
$$\Big\|\sum_{k=1}^n a_kY_k\Big\|_p \le K\,\Big\|\sum_{k=1}^n b_kY_k\Big\|_p,$$
for some absolute constant K.

Proof: It follows from Kwapień and Woyczyński (1992, Proposition 1.4.2) (cf. Hitczenko (1994, Proposition 4.1) for details) that if 2q ≤ p and (Z_k) is any sequence of independent, symmetric random variables, then
$$\Big\|\sum Z_k\Big\|_p \le K\,\frac{p}{2q}\Big\{\Big\|\sum Z_k\Big\|_{2q} + \big\|\max|Z_k|\big\|_p\Big\} \le K\,\frac{p}{2q}\Big\{\Big\|\sum Z_k\Big\|_{2q} + \Big(\sum\|Z_k\|_p^p\Big)^{1/p}\Big\}.$$


Applying this inequality to Z_k = a_kY_k, and using Proposition 4.3 together with the inequalities
$$\Big\|\sum Z_k\Big\|_{2q} \le \Big\|\sum Z_k\Big\|_p \qquad\text{and}\qquad \Big(\sum\|Z_k\|_p^p\Big)^{1/p} \le \Big\|\sum Z_k\Big\|_p,$$
we get the desired result, provided p/q ≤ C.

Remark 4.5. It is natural to ask whether the assumption (−1)^l(ln φ_Y)^{(2l)}(0) ≥ 0 can be dropped. In general it cannot. To see this, let (ε_k) be a Rademacher sequence, and for p ∈ N let a_k = 1 or 0 according to whether k ≤ p or k > p. If b_1 = √p and b_k = 0 for k ≥ 2, then ‖a‖_s ≤ ‖b‖_s for 2 ≤ s ≤ p, but ‖∑ a_kε_k‖_p ≈ p and ‖∑ b_kε_k‖_p = √p.

Before we proceed, we need some more notation. For a random variable Z we let Pois(Z) = ∑_{k=1}^N Z_k, where N, Z_1, Z_2, … are independent random variables, N is a Poisson random variable with parameter 1, and (Z_k) are i.i.d. copies of Z. Moreover, if the Z_k are independent, then the Pois(Z_k) will always be chosen so that they are independent. Since P(|Pois(Y)| > t) ≥ (1/2)(1 − e^{−1})P(|Y| > t), the next proposition is a consequence of the contraction principle.
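A minimal sampler for Pois(Z), assuming nothing beyond the definition above (our sketch): draw N ~ Poisson(1) and sum N i.i.d. copies of Z. The check uses the identity φ_{Pois(Z)} = exp(φ_Z − 1) noted below, with Z Rademacher, so that φ_Z(t) = cos t.

```python
import numpy as np

rng = np.random.default_rng(2)

def pois_of(sample_z, trials):
    """Sample Pois(Z) = Z_1 + ... + Z_N, N ~ Poisson(1), (Z_k) i.i.d. ~ Z."""
    counts = rng.poisson(1.0, size=trials)
    return np.array([sample_z(k).sum() for k in counts])

rademacher = lambda k: rng.choice([-1.0, 1.0], size=k)
x = pois_of(rademacher, 100_000)
t = 1.3
print(np.mean(np.cos(t * x)))     # empirical char. function (real, by symmetry)
print(np.exp(np.cos(t) - 1.0))    # exp(phi_Z(t) - 1), should agree
```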

Proposition 4.6. Let (Y_k) be a sequence of independent symmetric random variables. Then
$$\Big\|\sum Y_k\Big\|_p \le C\,\Big\|\sum \mathrm{Pois}(Y_k)\Big\|_p,$$
for some absolute constant C.

Next we note that, for an arbitrary random variable Z,
$$\varphi_{\mathrm{Pois}(Z)} = \exp\{\varphi_Z - 1\},$$
and it follows, in particular, that if Z is symmetric, then (−1)^k(ln φ_{Pois(Z)})^{(2k)}(0) ≥ 0 for all k. Thus, by the above discussion, if (Y_k) are i.i.d. copies of a symmetric random variable Y, then
$$\Big\|\sum a_k\mathrm{Pois}(Y_k)\Big\|_p \le C\,\Big\|\sum b_k\mathrm{Pois}(Y_k)\Big\|_p,$$
whenever ‖a‖_s ≤ ‖b‖_s for 2 ≤ s ≤ p.

Now fix a sequence a, and let m = ⌈(‖a‖_2/‖a‖_p)^{2p/(p−2)}⌉ as in Theorem 4.1. Define a sequence b by
$$b_k = \begin{cases}\beta, & \text{if } k\le m;\\ 0, & \text{otherwise,}\end{cases}$$
where β = ‖a‖_p/m^{1/p}. Note that ‖b‖_2 ≥ ‖a‖_2 and ‖b‖_p = ‖a‖_p. Hence, by Hölder's inequality, ‖a‖_s ≤ ‖b‖_s for all 2 ≤ s ≤ p. Therefore, for an arbitrary sequence of i.i.d. symmetric random variables (Y_k) ⊂ L_p we have

$$\Big\|\sum a_kY_k\Big\|_p \le K\,\frac{\|a\|_p}{m^{1/p}}\,\Big\|\sum_{k=1}^m \mathrm{Pois}(Y_k)\Big\|_p = K\,\frac{\|a\|_p}{m^{1/p}}\,\Big\|\sum_{k=1}^{N_m}Y_k\Big\|_p,$$
where N_m is a Poisson random variable with parameter m, independent of the sequence (Y_k).

Our final step is to estimate the quantity ‖∑_{k=1}^{N_m} Y_k‖_p. The following computation is rather straightforward; a very similar calculation can be found, for example, in Klass (1976, proof of Proposition 1).


Proposition 4.7. For m ∈ N, and (Y_k) and N_m as above, we have
$$\Big\|\sum_{k=1}^{N_m}Y_k\Big\|_p \le K\,\Big\|\sum_{k=1}^{m\vee p}Y_k\Big\|_p.$$

Proof: For j ≥ 1, let S_j = ∑_{k=1}^j Y_k. Note that, since (S_j/j) is a reversed martingale (or by an application of the triangle inequality), ‖S_j/j‖_p is decreasing in j. Therefore, for j_0 to be specified later, we have
$$\Big\|\sum_{k=1}^{N_m}Y_k\Big\|_p^p = \sum_{j=1}^{\infty}\|S_j\|_p^p\,\frac{e^{-m}m^j}{j!} \le \|S_{j_0}\|_p^p\sum_{j=1}^{j_0}\frac{e^{-m}m^j}{j!} + \frac{\|S_{j_0}\|_p^p}{j_0^p}\sum_{j>j_0}\frac{j^pe^{-m}m^j}{j!} \le \|S_{j_0}\|_p^p\Bigg(1 + \frac{e^{-m}}{j_0^p}\sum_{j>j_0}\frac{j^pm^j}{j!}\Bigg).$$

The ratio of two consecutive terms in the last sum is equal to
$$\Big(\frac{j+1}{j}\Big)^p\frac{m}{j+1} \le e^{p/j}\,\frac{m}{j+1} \le \frac{1}{2},$$
whenever j ≥ max{2em, p}. Therefore, choosing j_0 ≈ max{2em, p}, we obtain
$$\Big\|\sum_{k=1}^{N_m}Y_k\Big\|_p^p \le \|S_{j_0}\|_p^p\Bigg(1 + \frac{2e^{-m}}{j_0^p}\,j_0^p\Big(\frac{m}{j_0}\Big)^{j_0}\Bigg) \le 2\,\|S_{j_0}\|_p^p \le K^p\,\|S_{m\vee p}\|_p^p.$$
This completes the proof of Proposition 4.7.

Theorem 4.1 now follows immediately from the above results.


5. Distribution of a sum of i.i.d. random variables.

In this section, Y_1, …, Y_n are independent copies of a symmetric random variable Y. We fix n, and let S = S_n = ∑_{i=1}^n Y_i, and M = M_n = sup_{1≤i≤n} Y_i. Our aim is to calculate ‖S‖_p, and as we will see below, this is equivalent to finding ‖S‖_{p,∞} + ‖M‖_p. Let us recall that, for a random variable Z, ‖Z‖_{p,∞} is defined by
$$\|Z\|_{p,\infty} = \sup_{s>0}\,s^{1/p}\,Z^*(s),$$
where Z^* denotes the decreasing rearrangement of the distribution function of Z, i.e.
$$Z^*(s) = \sup\{t : P(|Z| > t) > s\}, \qquad 0 < s < 1.$$
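Empirically, Z*(s) is just an upper quantile of |Z|, which makes both quantities easy to approximate from a sample. A small sketch of ours (illustrative; a standard Gaussian Z and the grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def decreasing_rearrangement(sample):
    """Return s -> Z*(s) estimated from an i.i.d. sample of Z;
    Z*(s) = sup{t : P(|Z| > t) > s} is the upper s-quantile of |Z|."""
    desc = np.sort(np.abs(sample))[::-1]   # largest values first
    n = desc.size
    return lambda s: desc[np.minimum((np.asarray(s) * n).astype(int), n - 1)]

z_star = decreasing_rearrangement(rng.standard_normal(1_000_000))
s = np.geomspace(1e-5, 0.999, 400)
p = 4.0
print(np.max(s ** (1.0 / p) * z_star(s)))   # approximates ||Z||_{p,infinity}
```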

The following simple observation will be useful.

Lemma 5.1. Let p ≥ 2. For any sequence of independent, symmetric random variables (Z_k) ⊂ L_p we have
$$\frac{1}{4}\Big\{\Big\|\sum Z_k\Big\|_{p,\infty} + \big\|\max|Z_k|\big\|_p\Big\} \le \Big\|\sum Z_k\Big\|_p \le K\Big\{\Big\|\sum Z_k\Big\|_{p,\infty} + \big\|\max|Z_k|\big\|_p\Big\},$$
for some absolute constant K.

Proof: Since for p > q ≥ 1 and for any random variable W we have ‖W‖_{q,∞} ≤ ‖W‖_q ≤ (p/(p−q))^{1/q}‖W‖_{p,∞}, the first inequality is trivial. For the second inequality we first use a result of Kwapień and Woyczyński (see the proof of Proposition 4.4 above),
$$\Big\|\sum Z_k\Big\|_p \le K\,\frac{p}{q}\Big\{\Big\|\sum Z_k\Big\|_q + \big\|\max|Z_k|\big\|_p\Big\},$$
and then the right estimate above with q = p/2. This completes the proof.

Throughout the rest of this section, by f(x) ≼ g(x) we mean that f(cx) ≤ Cg(x) for some constants c, C. We will write f(x) ≈ g(x) if f(x) ≼ g(x) and g(x) ≼ f(x). With this convention we have

Theorem 5.2. For 0 ≤ θ ≤ 1 we have
$$S^*(\theta) \preceq T_1(\theta) + T_2(\theta),$$
where
$$T_1(\theta) = \frac{\log(1/\theta)}{\sqrt{\theta^{-1/n}-1}}\,\Big\|Y^*\big|_{[\theta/n\vee(\theta^{-1/n}-1),\,1]}\Big\|_2,$$
and
$$T_2(\theta) = \log(1/\theta)\sup_{\theta/n\le t\le\theta^{-1/n}-1}\frac{Y^*(t)}{\log\Big(1+\frac{\theta^{-1/n}-1}{t}\Big)}.$$


Proof: Let t > 0. Following the argument in Hahn and Klass (1995), for a > 0 we have
$$P(S\ge t) \le P(M\ge a) + \inf_{\lambda>0}E\big(e^{\lambda(S-t)}\,\big|\,M<a\big)P(M<a) = 1 - P^n(Y<a) + \Big(\inf_{\lambda>0}E\big(e^{\lambda(Y-t/n)}\,\big|\,Y<a\big)P(Y<a)\Big)^n.$$

If we choose a so that these two terms are equal, and then let
$$\theta = 1 - P^n(Y<a) = \Big(\inf_{\lambda>0}E\big(e^{\lambda(Y-t/n)}\,\big|\,Y<a\big)P(Y<a)\Big)^n,$$
then, by the above computation,
$$P(S\ge t) \le 2\theta.$$

The whole idea now is to invert the above relationship, i.e. starting with θ we will find a relatively small t so that S*(θ) ≼ t. Since the values of S*(θ) are of no importance for θ close to 1, we can assume that θ is bounded away from 1, say θ ≤ 1/10. Then P(Y < a) = (1 − θ)^{1/n} ≈ 1 − θ/n, so that P(Y ≥ a) ≈ θ/n, which, in turn, means that Y*(θ/n) ≈ a.

In order to find such a t, we will use the relation
$$\theta = \inf_\lambda E\big(e^{\lambda(Y-t/n)}\,\big|\,Y<a\big)^n = \inf_\lambda\big(Ee^{\lambda(\overline{Y}-t/n)}\big)^n,$$
where Ȳ denotes Y·I_{Y<a}. Then, for λ > 0 we have
$$\theta \le e^{-\lambda t}\big(Ee^{\lambda\overline{Y}}\big)^n,$$
and taking logarithms on both sides we get
$$t \le \frac{1}{\lambda}\ln(1/\theta) + n\ln\big(Ee^{\lambda\overline{Y}}\big)^{1/\lambda}.$$
Hence, we can take
$$t = \inf_{\lambda>0}\Big\{\frac{1}{\lambda}\ln(1/\theta) + n\ln\big(Ee^{\lambda\overline{Y}}\big)^{1/\lambda}\Big\}.$$

Note that the second term, being equal to n ln‖e^{Ȳ}‖_λ, is an increasing function of λ. Since the first term is decreasing in λ, it follows that, up to a factor of 2, the infimum is attained when both terms are equal. Thus t ≈ (1/λ) ln(1/θ), where λ is chosen so that ln(1/θ) = n ln(Ee^{λȲ}), i.e. Ee^{λȲ} = θ^{−1/n}. Since Ȳ is symmetric, Ee^{λȲ} = E cosh(λ|Ȳ|), and the above equality is equivalent to
$$E\bigg(\frac{\cosh(\lambda\overline{Y})-1}{\theta^{-1/n}-1}\bigg) = 1.$$


But this simply means that
$$t \approx \big\|\overline{Y}\big\|_\Phi\,\ln(1/\theta), \qquad\text{where}\quad \Phi(s) = \frac{\cosh(s)-1}{\theta^{-1/n}-1}.$$
(Recall that if Ψ is an Orlicz function and Z is a random variable, then ‖Z‖_Ψ = inf{u > 0 : EΨ(Z/u) ≤ 1}.)

In order to complete the proof, it suffices to show that, with Φ as above, we have
$$\big\|\overline{Y}\big\|_\Phi \approx \sup_{\theta/n\le t\le\theta^{-1/n}-1}\frac{Y^*(t)}{\log\Big(1+\frac{\theta^{-1/n}-1}{t}\Big)} + \frac{1}{\sqrt{\theta^{-1/n}-1}}\,\Big\|Y^*\big|_{[\theta/n\vee(\theta^{-1/n}-1),\,1]}\Big\|_2.$$

We will need the following two observations. For an Orlicz function Ψ and a random variable Z, define the weak Orlicz norm by
$$\|Z\|_{\Psi,\infty} = \sup_{0<x<1}\Big\{\frac{1}{\Psi^{-1}(1/x)}\,Z^*(x)\Big\}.$$

Lemma 5.3. Let ψ(s) = e^s − 1. Then
$$\|Z\|_{\psi,\infty} \le \|Z\|_\psi \le 3\,\|Z\|_{\psi,\infty}.$$

Proof: Only the second inequality needs justification. Suppose ‖Z‖_{ψ,∞} ≤ 1. This means that Z*(x) ≤ ln(1 + 1/x) for each 0 < x < 1. Hence
$$E\psi(Z/u) = \int_0^1 e^{Z^*(x)/u}dx - 1 \le \int_0^1 e^{(1/u)\ln(1+1/x)}dx - 1 \le \int_0^1(2/x)^{1/u}dx - 1 = \frac{2^{1/u}\,u}{u-1} - 1 \le 1,$$
whenever u ≥ 3.

Remark 5.4. The above relationship is true for more general Orlicz functions, as long as they grow fast enough; see Theorem 4.6 in Montgomery-Smith (1992) for more details.

Lemma 5.5. Suppose that Φ_1, Φ_2 : [0,∞) → [0,∞) are nondecreasing continuous functions that map zero to zero, and such that there exists a constant c > 1 such that Φ_i(x/c) ≤ Φ_i(x)/2 (i = 1, 2). Suppose further that there are numbers a, A > 0 such that Φ_1(a) = Φ_2(a) = A. Define Φ : [0,∞) → [0,∞) by
$$\Phi(x) = \begin{cases}\Phi_1(x) & \text{if } x\le a\\ \Phi_2(x) & \text{if } x>a.\end{cases}$$
Then for any measurable scalar valued function f, we have that
$$\|f\|_\Phi \approx \big\|f^*|_{[0,A^{-1}]}\big\|_{\Phi_2} + \big\|f^*|_{[A^{-1},\infty)}\big\|_{\Phi_1}.$$
The constants of approximation depend only upon c.

Proof: Notice that Φ(x/c²) ≤ Φ(x)/2. This is clear if both x/c² and x lie on the same side of a; otherwise it follows by considering whether x/c is greater or less than a.


Let us first suppose that the right hand side is less than 1. Then
$$\int_0^{A^{-1}}\Phi_2(f^*(x))\,dx \le 1 \qquad\text{and}\qquad \int_{A^{-1}}^\infty\Phi_1(f^*(x))\,dx \le 1.$$
From the first inequality, we see that A^{−1}Φ_2(f*(A^{−1})) ≤ 1, from which it follows that f*(A^{−1}) ≤ a. Let B be such that f*(B+) ≤ a ≤ f*(B−), so that B ≤ A^{−1}. (Here, and in the rest of the proof, we define f(A+) = lim_{x↘A} f(x) and f(A−) = lim_{x↗A} f(x).) Then
$$\int_0^B\Phi(f^*(x))\,dx = \int_0^B\Phi_2(f^*(x))\,dx \le 1,$$
and
$$\int_B^{A^{-1}}\Phi(f^*(x))\,dx \le A^{-1}\Phi(f^*(B+)) \le 1,$$
and
$$\int_{A^{-1}}^\infty\Phi(f^*(x))\,dx = \int_{A^{-1}}^\infty\Phi_1(f^*(x))\,dx \le 1.$$
Hence
$$\int_0^\infty\Phi(f^*(x))\,dx \le 3,$$
and so
$$\int_0^\infty\Phi(f^*(x)/c^4)\,dx \le 1,$$
that is, ‖f‖_Φ ≤ c⁴.

Now suppose that ‖f‖_Φ ≤ 1, so that
$$\int_0^\infty\Phi(f^*(x))\,dx \le 1.$$
Then A^{−1}Φ(f*(A^{−1})) ≤ 1, which implies that f*(A^{−1}) ≤ a. Let B be such that f*(B+) ≤ a ≤ f*(B−), so that B ≤ A^{−1}. Then
$$\int_0^B\Phi_2(f^*(x))\,dx = \int_0^B\Phi(f^*(x))\,dx \le 1,$$
and
$$\int_B^{A^{-1}}\Phi_2(f^*(x))\,dx \le A^{-1}\Phi_2(f^*(B+)) \le 1.$$
Hence
$$\int_0^{A^{-1}}\Phi_2(f^*(x))\,dx \le 2,$$
from which it follows that ‖f*|_{[0,A^{−1}]}‖_{Φ_2} ≤ c. Also
$$\int_{A^{-1}}^\infty\Phi_1(f^*(x))\,dx = \int_{A^{-1}}^\infty\Phi(f^*(x))\,dx \le 1,$$
and hence ‖f*|_{[A^{−1},∞)}‖_{Φ_1} ≤ 1. This completes the proof of Lemma 5.5.

Now, in order to finish the proof of Theorem 5.2, let Φ(s) = (cosh(s) − 1)/(θ^{−1/n} − 1). Then
$$\Phi(s) = \begin{cases}\Phi_1(s), & \text{if } 0\le s\le 1;\\ \Phi_2(s), & \text{if } s>1;\end{cases}$$
where
$$\Phi_1(s) \approx \frac{s^2}{\theta^{-1/n}-1} \qquad\text{and}\qquad \Phi_2(s) \approx \frac{e^s-1}{\theta^{-1/n}-1}.$$

Since Ȳ* = (Y I_{Y<a})* ≈ (Y*|_{[θ/n,1]})*, by Lemma 5.5 we obtain
$$\begin{aligned}
\big\|\overline{Y}\big\|_\Phi &\approx \Big\|Y^*\big|_{[0,\,\theta^{-1/n}-1]}\Big\|_{\Phi_2} + \Big\|Y^*\big|_{[\theta^{-1/n}-1,\,1]}\Big\|_{\Phi_1}\\
&\approx \Big\|Y^*\big|_{[0,\,\theta^{-1/n}-1]}\Big\|_{\Phi_2,\infty} + \frac{1}{\sqrt{\theta^{-1/n}-1}}\,\Big\|Y^*\big|_{[\theta^{-1/n}-1,\,1]}\Big\|_2\\
&\approx \sup_{\theta/n\le t\le\theta^{-1/n}-1}\frac{Y^*(t)}{\log\Big(1+\frac{\theta^{-1/n}-1}{t}\Big)} + \frac{1}{\sqrt{\theta^{-1/n}-1}}\,\Big\|Y^*\big|_{[\theta/n\vee(\theta^{-1/n}-1),\,1]}\Big\|_2.
\end{aligned}$$
This completes the proof of Theorem 5.2.


6. Moments of linear combinations of symmetric Weibull random variables.

In this section we will apply our methods to obtain an upper bound for the L_p-norm of a linear combination of i.i.d. symmetric Weibull random variables with parameter 0 < r < 1. We will also show that this upper bound is tight. A random variable X will be called symmetric Weibull with parameter r, 0 < r < ∞, if X is symmetric and |X| has density given by
$$f_{|X|}(t) = rt^{r-1}e^{-t^r}, \qquad t > 0.$$

We refer to Johnson and Kotz (1970, Chapter 20) for more information on Weibull random variables. Here we only note that, by a change of variables,
$$\|X\|_p^p = \Gamma\Big(1+\frac{p}{r}\Big), \qquad p > 0,$$
so that, using Stirling's formula and elementary estimates, we have the following.

Lemma 6.1. If X is a symmetric Weibull random variable with parameter 0 < r < 1, and p ≥ 2, then
$$p\,\|X\|_2 \le C\,\|X\|_p,$$

where C is a constant not depending on p or r.
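The moment identity behind this lemma is easy to check by simulation, since |X| has the same distribution as Γ^{1/r} for a standard exponential Γ. A one-off numerical check (ours; the parameters are arbitrary):

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(4)
r, p = 0.5, 3.0
x = rng.exponential(size=2_000_000) ** (1.0 / r)   # |X| for symmetric Weibull(r)
print((x ** p).mean())       # Monte Carlo estimate of E|X|^p
print(gamma(1.0 + p / r))    # Gamma(1 + p/r), should be close
```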

As we mentioned in the Introduction, Gluskin and Kwapień (1995) established a two-sided inequality for the L_p-norm of a linear combination of i.i.d. symmetric random variables with logarithmically concave tails. In the special case where P(|ξ| > t) = exp(−t^r), r ≥ 1, their result reads as follows:
$$c\Big\{\Big(\sum_{k\le p}a_k^{r'}\Big)^{1/r'}\|\xi\|_p + \sqrt{p}\,\|\xi\|_2\Big(\sum_{k>p}a_k^2\Big)^{1/2}\Big\} \le \Big\|\sum a_k\xi_k\Big\|_p \le C\Big\{\Big(\sum_{k\le p}a_k^{r'}\Big)^{1/r'}\|\xi\|_p + \sqrt{p}\,\|\xi\|_2\Big(\sum_{k>p}a_k^2\Big)^{1/2}\Big\},$$
where r′ is the exponent conjugate to r, i.e. 1/r + 1/r′ = 1, and c and C are absolute constants. In this section we complement the result of Gluskin and Kwapień by treating the case r < 1. Here is the main result of this section.

Theorem 6.2. There exist absolute constants c, C such that if (X_i) is a sequence of i.i.d. symmetric Weibull random variables with parameter r, where 0 < r < 1, and (a_k) ∈ ℓ_2, then
$$c\max\big\{\sqrt{p}\,\|a\|_2\|X\|_2,\ \|a\|_p\|X\|_p\big\} \le \Big\|\sum a_kX_k\Big\|_p \le C\max\big\{\sqrt{p}\,\|a\|_2\|X\|_2,\ \|a\|_p\|X\|_p\big\}.$$

Proof: We begin with the following result.


Proposition 6.3. Let p ≥ 2, and let (X_i) be a sequence of i.i.d. symmetric Weibull random variables with parameter r, where 0 < r < 1. Then the following is true:
$$a_1\,\|X\|_p \le \Big\|\sum_{k\le p}a_kX_k\Big\|_p \le C\,a_1\,\|X\|_p.$$

Proof: The first inequality is trivial. To prove the second, note that
$$\Big\|\sum_{k\le p}a_kX_k\Big\|_p \le a_1\Big\|\sum_{k\le p}X_k\Big\|_p \le a_1\Big\|\sum_{k\le p}|X_k|\Big\|_p.$$

Assume first that r is bounded away from 0, say r > 1/2. Then, letting (Γ_i) be a sequence of i.i.d. exponential random variables with mean 1, we can write
$$E\Big(\sum_{k\le p}|X_k|\Big)^p = E\Big[\sum_{k\le p}\big(|X_k|^r\big)^{1/r}\Big]^p = E\Big[\sum_{k\le p}\Gamma_k^{1/r}\Big]^p \le E\Big[\sum_{k\le p}\Gamma_k\Big]^{p/r} \le \frac{p\,\Gamma(p+p/r)}{\Gamma(p)} \le \frac{p\,\big(p(1+1/r)\big)^p}{\Gamma(p)}\,\Gamma(1+p/r) \le K^p\,\Gamma(1+p/r),$$
since r is bounded away from zero. This shows the upper bound for r > 1/2.

To handle the case r ≤ 1/2 we will apply a method that was used in Schechtman and Zinn (1990). Let (X_{(k)}) denote the nonincreasing rearrangement of (|X_k|)_{k≤p}. For a number t > 0 we have
$$P\Big(\sum_{k\le p}|X_k|\ge t\Big) = P\Big(\sum_{k\le p}X_{(k)}\ge t\Big) \le \sum_{k\le p}P\big(X_{(k)}\ge q_kt\big),$$
where (q_k) is any sequence of nonnegative numbers such that ∑_{k≤p} q_k ≤ 1. Now, using the inequality
$$P\big(Y_{(k)}\ge s\big) \le \frac{\big(\sum_{j\le p}P(Y_j\ge s)\big)^k}{k!},$$
valid for any sequence of independent random variables (Y_k) (cf. e.g. Talagrand (1989, Lemma 9)), we get that
$$P\big(X_{(k)}\ge q_kt\big) \le \frac{\big(p\exp\{-q_k^rt^r\}\big)^k}{k!}.$$
Therefore,
$$E\Big(\sum_{k\le p}|X_k|\Big)^p \le \sum_{k\le p}\frac{p^k}{k!}\cdot p\int_0^\infty t^{p-1}\exp\{-kq_k^rt^r\}\,dt.$$
Substituting u = (k^{1/r}q_kt)^r into the k-th term and integrating we get
$$\int_0^\infty t^{p-1}\exp\{-(k^{1/r}q_kt)^r\}\,dt = \frac{\Gamma(p/r)}{r\,(k^{1/r}q_k)^p}.$$


Therefore,
$$E\Big|\sum_{k\le p}X_k\Big|^p \le (p/r)\,\Gamma(p/r)\Big\{\sum_{k\le p}\frac{p^{k+1}}{k!\,k^{p/r}\,q_k^p}\Big\} \le (p/r)\,\Gamma(p/r)\,\frac{1}{\big(\inf_k\{k^{1/r}q_k\}\big)^p}\sum_{k=1}^\infty\frac{p^k}{k!},$$
which is bounded by C^p‖X‖_p^p as long as k^{1/r}q_k is bounded away from zero. The choice q_k ≈ k^{−1/r} concludes the proof of the Proposition.

Now for the general case. By the previous proposition we can and do assume that n > p. We will establish the upper bound first.

We first observe that it suffices to prove the result under the additional assumption that the first ⌈p⌉ entries in the coefficient sequence are equal. Indeed, suppose we know the result in that case. Then, for a general sequence (a_k), we can write
$$\begin{aligned}
\Big\|\sum a_kX_k\Big\|_p &\le \Big\|\sum_{k\le p}a_1X_k + \sum_{k\ge 1}a_kX_{\lceil p\rceil+k}\Big\|_p\\
&\le C\max\Big\{\sqrt{p}\,\|X\|_2\big(\sqrt{p}\,a_1 + \|a\|_2\big),\ \big(p^{1/p}a_1 + \|a\|_p\big)\|X\|_p\Big\}\\
&\le C\max\big\{\sqrt{p}\,\|a\|_2\|X\|_2,\ \|a\|_p\|X\|_p\big\},
\end{aligned}$$
since, by Lemma 6.1, we have
$$p\,a_1\|X\|_2 \le K\,a_1\|X\|_p \le K\,\|a\|_p\|X\|_p.$$

Next we note that if the first ⌈p⌉ coefficients are equal, and m is defined as in Theorem 4.1, then automatically
$$m \approx \big(\|a\|_2/\|a\|_p\big)^{2p/(p-2)} \ge cp,$$
and the inequality of that theorem takes the form
$$\Big\|\sum a_kX_k\Big\|_p \le K\,\frac{\|a\|_p}{m^{1/p}}\,\Big\|\sum_{k=1}^m X_k\Big\|_p.$$

By definition m ≤ 2(‖a‖_2/‖a‖_p)^{2p/(p−2)}, so that
$$\frac{\|a\|_p}{m^{1/p}}\,\sqrt{pm} \le \sqrt{2p}\,\|a\|_2.$$
Therefore, in order to complete the proof it suffices to show that
$$\Big\|\sum_{k=1}^m X_k\Big\|_p \le K\max\big\{\sqrt{pm}\,\|X\|_2,\ m^{1/p}\,\|X\|_p\big\}.$$
In other words, the problem has been reduced to the special case when all coefficients are equal.


To prove the latter inequality we will use Theorem 5.2. Fix n ∈ N. Let S = ∑_{i=1}^n X_i and M = sup_{1≤i≤n} X_i, so that M*(t) ≈ X*(t/n). Our aim is to show
$$\|S\|_p \le K\big\{\sqrt{np}\,\|X\|_2 + \|M\|_p\big\}.$$
(Note that ‖M‖_p ≤ n^{1/p}‖X‖_p.) By Lemma 5.1, it suffices to estimate ‖S‖_{p,∞} + ‖M‖_p. Let us recall that X*(t) = (log(1/t))^{1/r}.

From Theorem 5.2, we know that
$$S^*(x) \preceq T_1(x) + T_2(x),$$
where
$$T_1(x) = \frac{\log(1/x)}{\sqrt{x^{-1/n}-1}}\,\Big\|X^*\big|_{[x/n\vee(x^{-1/n}-1),\,1]}\Big\|_2,$$
and
$$T_2(x) = \log(1/x)\sup_{x/n\le t\le x^{-1/n}-1}\frac{X^*(t)}{\log\Big(1+\frac{x^{-1/n}-1}{t}\Big)}.$$
Now, ‖S‖_{p,∞} ≈ sup_{0<x<1} x^{1/p}T_1(x) + sup_{0<x<1} x^{1/p}T_2(x).

To get a handle on these quantities, we use the following approximation:
$$x^{-1/n}-1 \approx \begin{cases}(1/n)\log(1/x) & \text{if } x\ge e^{-n}\\ x^{-1/n} & \text{if } x\le e^{-n}.\end{cases}$$
Now we can see that if x ≤ e^{−n}, then T_1(x) = 0, and if x ≥ e^{−n}, then
$$T_1(x) \le c\,\sqrt{n}\,\sqrt{\log(1/x)}\,\|X\|_2.$$
Hence, sup_{0<x<1} x^{1/p}T_1(x) ≤ c√(np) ‖X‖_2.

As for T_2, we use similar approximations, and we arrive at the following formula. If x ≥ e^{−n}, then
$$T_2(x) \approx \sup_{x/n\le t\le(1/n)\log(1/x)}\frac{\log(1/x)\,X^*(t)}{1+\log(1/t)-\log(n/\log(1/x))},$$
and if x ≤ e^{−n}, then
$$T_2(x) \approx \sup_{x/n\le t\le x^{1/n}}\frac{\log(1/x)\,X^*(t)}{\log(1/t)}.$$

Now let us make the substitution X*(t) = (log(1/t))^{1/r}, where 0 < r < 1. If x ≤ e^{−n}, then it is clear that the supremum that defines T_2(x) is attained when t is as small as possible, that is, t = x/n. But, since x ≤ e^{−n}, it follows that log(n/x) ≈ log(1/x), and hence
$$T_2(x) \approx X^*(x/n) \approx M^*(x).$$


Now consider the case x ≥ e^{−n}. Then
$$T_2(x) \approx \sup_{x/n\le t\le(1/n)\log(1/x)}\log(1/x)\,H(\log(1/t)),$$
where
$$H(u) = \frac{u^{1/r}}{1-\log(n/\log(1/x))+u}.$$

Now, looking at the graph of H(u), we see that the supremum of H(u) over an interval on which H(u) is positive is attained at one of the endpoints of the interval. Hence
$$T_2(x) \approx \log(1/x)\max\big\{H(\log(n/x)),\ H(\log(n/\log(1/x)))\big\}.$$
Now,
$$\log(1/x)\,H(\log(n/x)) = \frac{\log(1/x)\,X^*(x/n)}{1+\log(\log(1/x)/x)} \approx X^*(x/n) \approx M^*(x),$$
because 1 + log(log(1/x)/x) ≈ log(1/x). Also,
$$\log(1/x)\,H(\log(n/\log(1/x))) = \log(1/x)\,\big(\log(n/\log(1/x))\big)^{1/r}.$$

Putting all this together, we have that
$$\sup_{0<x<1}x^{1/p}\,T_2(x) \approx \|M\|_{p,\infty} + \sup_{x\ge e^{-n}}F(\log(1/x)),$$
where
$$F(u) = e^{-u/p}\,u\,\big(\log(n/u)\big)^{1/r}.$$

Thus, the proof will be complete if we can show that F(u) ≤ √(np) ‖X‖_2 for all u ≤ n. To this end, from Stirling's formula, we have that ‖X‖_2 = Γ(1 + 2/r)^{1/2} ≥ c^{−1}(2/(re))^{1/r}. Notice that
$$\frac{uF'(u)}{F(u)} = -\frac{u}{p} + 1 - \frac{1}{r\log(n/u)}.$$

This quantity is positive if u is small, it is negative if u = p or u = n, and it is decreasing. Hence, F(u) attains its supremum at u_0, where F′(u_0) = 0, and u_0 ≤ p. But then
$$F(u_0) \le \sqrt{np}\,G(n/u_0),$$
where
$$G(v) = \frac{(\log v)^{1/r}}{\sqrt{v}}.$$
Simple calculus shows us that
$$G(v) \le G\big(e^{2/r}\big) = \Big(\frac{2}{re}\Big)^{1/r} \le c\,\|X\|_2,$$


which proves the right inequality in Theorem 6.2.

To prove the left inequality, notice that by the original proof of Rosenthal's inequality (Rosenthal (1970)) we have
$$\Big\|\sum a_kX_k\Big\|_p \ge \Big(\sum a_k^p\Big)^{1/p}\,\|X\|_p,$$
so it suffices to show that
$$\Big\|\sum a_kX_k\Big\|_p \ge c\,\sqrt{p}\,\Big(\sum a_k^2\Big)^{1/2}\,\|X\|_2.$$

Let us assume without loss of generality that p ≥ 3. Let
$$\delta = \frac{\big(\sum_{j\le p}a_j^2\big)^{1/2}}{\sqrt{p}},$$
and define a sequence d = (d_k) by the formula
$$d_k = \begin{cases}\delta, & \text{if } k\le p;\\ a_k, & \text{otherwise.}\end{cases}$$

Then
$$\Big\|\sum a_kX_k\Big\|_p \ge \kappa\,\Big\|\sum d_kX_k\Big\|_p,$$
for some absolute constant κ > 0. Indeed, let C be a constant such that
$$\Big\|\sum_{k\le p}X_k\Big\|_p \le C\,\|X\|_p.$$

Since δ ≤ a_1 we have
$$\Big\|\sum d_kX_k\Big\|_p \le \delta\Big\|\sum_{k\le p}X_k\Big\|_p + \Big\|\sum_{k>p}a_kX_k\Big\|_p \le C\,a_1\|X\|_p + \Big\|\sum_{k>p}a_kX_k\Big\|_p \le (C+1)\Big\|\sum a_kX_k\Big\|_p,$$

so that one can take κ = 1/(C + 1). Let (ε_k) be a sequence of Rademacher random variables, independent of the sequence (X_k). Notice that max_j d_j/‖d‖_2 ≤ 1/√p. Therefore, using the minimality property of Rademacher functions (cf. Figiel, Hitczenko, Johnson, Schechtman and Zinn (1995, Theorem 1.1), or Pinelis (1994, Corollary 2.5)) (here we use p ≥ 3), and then Hitczenko and Kwapień (1994, Theorem 1), we get
$$\Big\|\sum a_kX_k\Big\|_p \ge \kappa\Big\|\sum d_kX_k\Big\|_p \ge \kappa\Big\|\sum d_k\|X_k\|_2\,\varepsilon_k\Big\|_p \ge c\,\sqrt{p}\,\|d\|_2\|X\|_2 = c\,\sqrt{p}\,\|a\|_2\|X\|_2.$$
The proof is complete.

Remark 6.4: For r = 1 our formula gives p‖a‖_p + √p‖a‖_2, while Gluskin and Kwapień obtained p sup_{k≤p} a_k + √p(∑_{k>p} a_k^2)^{1/2}. Although these two quantities look different, they are equivalent. Clearly p sup_{k≤p} a_k + √p(∑_{k>p} a_k^2)^{1/2} ≤ p‖a‖_p + √p‖a‖_2. To see that the opposite inequality holds with an absolute constant, notice that if a_1 and ∑_{k>p} a_k^2 are fixed, then p‖a‖_p + √p‖a‖_2 is maximized if the first several a_k's are equal to a_1, the next one is between a_1 and 0, and the rest are 0. In this case it is very easy to check that the required inequality holds.
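The equivalence claimed in this remark is easy to probe numerically; the sketch below (ours, on random nonincreasing coefficient sequences) compares the two functionals. The ratio is at least 1 by monotonicity and, per the remark, bounded above by an absolute constant.

```python
import numpy as np

rng = np.random.default_rng(5)
p = 10
for _ in range(5):
    a = np.sort(rng.exponential(size=200))[::-1]   # nonincreasing, per our convention
    gk   = p * a[0] + np.sqrt(p) * np.sqrt(np.sum(a[p:] ** 2))
    ours = p * np.sum(a ** p) ** (1.0 / p) + np.sqrt(p) * np.sqrt(np.sum(a ** 2))
    print(ours / gk)   # stays within an absolute constant of 1
```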

Theorem 6.2 implies the following result for tail probabilities.


Corollary 6.5. Let a = (a_k) ∈ ℓ_2, a ≠ 0, and let S = ∑ a_kX_k. Then
$$\lim_{t\to\infty}\log_t\ln\big(1/P(|S|>t)\big) = r.$$

Proof: Since P(|S| > t) ≥ ½P(a_1|X_1| > t) = ½ exp{−(t/a_1)^r}, we immediately get
$$\limsup_{t\to\infty}\log_t\ln\big(1/P(|S|>t)\big) \le r.$$

To show the opposite inequality note that if s < r then
$$E\exp\{|S|^s\} = 1 + \sum_{k=1}^\infty\frac{E|S|^{sk}}{k!} < \infty,$$
by Theorem 6.2. Hence P(|S| > t) ≤ exp{−t^s}E exp{|S|^s}, which implies
$$\liminf_{t\to\infty}\log_t\ln\big(1/P(|S|>t)\big) \ge s.$$

Since this is true for every s < r, the result follows.

Acknowledgments. We would like to thank S. Kwapień for his remarks and suggestions that led to the discovery of the proof of Theorem 2.2.

References

1. Figiel, T., Hitczenko, P., Johnson, W.B., Schechtman, G., Zinn, J. (1994) Extremal properties of Rademacher functions with applications to the Khintchine and Rosenthal inequalities, Trans. Amer. Math. Soc., to appear.

2. Figiel, T., Iwaniec, T., Pełczyński, A. (1984) Computing norms and critical exponents of some operators in L_p-spaces, Studia Math. 79, 227-274.

3. Gluskin, E.D., Kwapień, S. (1995) Tail and moment estimates for sums of independent random variables with logarithmically concave tails, Studia Math. 114, 303-309.

4. Haagerup, U. (1982) The best constants in the Khintchine inequality, Studia Math. 70, 231-283.

5. Hahn, M.G., Klass, M.J. (1995) Approximation of partial sums of arbitrary i.i.d. random variables and the precision of the usual exponential bound, preprint.

6. Hitczenko, P. (1993) Domination inequality for martingale transforms of a Rademacher sequence, Israel J. Math. 84, 161-178.

7. Hitczenko, P. (1994) On a domination of sums of random variables by sums of conditionally independent ones, Ann. Probab. 22, 453-468.

8. Hitczenko, P., Kwapień, S. (1994) On the Rademacher series, in: Probability in Banach Spaces, Nine, Sandbjerg, Denmark, J. Hoffmann-Jørgensen, J. Kuelbs, M.B. Marcus, Eds., Birkhäuser, Boston, 31-36.

9. Johnson, N.L., Kotz, S. (1970) Continuous Univariate Distributions, Houghton Mifflin, New York.

10. Johnson, W.B., Schechtman, G., Zinn, J. (1983) Best constants in moment inequalities for linear combinations of independent and exchangeable random variables, Ann. Probab. 13, 234-253.

11. Klass, M. (1976) Precision bounds for the relative error in the approximation of E|S_n| and extensions, Ann. Probab. 8, 350-367.

12. Kwapień, S., Szulga, J. (1991) Hypercontraction methods in moment inequalities for series of independent random variables in normed spaces, Ann. Probab. 19, 1-8.

13. Kwapień, S., Woyczyński, W.A. (1992) Random Series and Stochastic Integrals. Single and Multiple, Birkhäuser, Boston.

14. Ledoux, M., Talagrand, M. (1991) Probability in Banach Spaces, Springer, Berlin, Heidelberg.

15. Marshall, A.W., Olkin, I. (1979) Inequalities: Theory of Majorization and Its Applications, Academic Press, New York.

16. Montgomery-Smith, S.J. (1990) The distribution of Rademacher sums, Proc. Amer. Math. Soc. 109, 517-522.

17. Montgomery-Smith, S.J. (1992) Comparison of Orlicz-Lorentz spaces, Studia Math. 103, 161-189.

18. Pinelis, I. (1994) Extremal probabilistic problems and Hotelling's T² test under a symmetry condition, Ann. Statist. 22, 357-368.

19. Pinelis, I. (1994) Optimum bounds for the distributions of martingales in Banach spaces, Ann. Probab. 22, 1679-1706.

20. Rosenthal, H.P. (1970) On the subspaces of L_p (p > 2) spanned by sequences of independent random variables, Israel J. Math. 8, 273-303.

21. Schechtman, G., Zinn, J. (1990) On the volume of the intersection of two ℓ_p^n balls, Proc. Amer. Math. Soc. 110, 217-224.

22. Talagrand, M. (1989) Isoperimetry and integrability of the sum of independent Banach space valued random variables, Ann. Probab. 17, 1546-1570.

23. Utev, S.A. (1985) Extremal problems in moment inequalities, in: Limit Theorems in Probability Theory, Trudy Inst. Math., Novosibirsk, 56-75 (in Russian).

P. Hitczenko
Department of Mathematics
North Carolina State University
Raleigh, NC 27695-8205, USA
e-mail: [email protected]

S. J. Montgomery-Smith
Department of Mathematics
University of Missouri - Columbia
Columbia, MO 65211
e-mail: [email protected]

K. Oleszkiewicz
Instytut Matematyki
Uniwersytet Warszawski
ul. Banacha 2
02-097 Warszawa, Poland
e-mail: [email protected]
