A semiparametric density estimator based on elliptical distributions

June 19, 2017 | Author: Eckhard Liebscher | Category: Statistics, Multivariate Analysis



ARTICLE IN PRESS

Journal of Multivariate Analysis 92 (2005) 205–225

A semiparametric density estimator based on elliptical distributions

Eckhard Liebscher

Institute of Mathematics, Technical University Ilmenau, D-98684 Ilmenau/Thür., Germany

Received 3 December 2002

Abstract

In the paper we study a semiparametric density estimation method based on the model of an elliptical distribution. The method considered here shows a way to overcome problems arising from the curse of dimensionality. The optimal rate of the uniform strong convergence of the estimator under consideration coincides with the optimal rate for the usual one-dimensional kernel density estimator except in a neighbourhood of the mean. Therefore the optimal rate does not depend on the dimension. Moreover, asymptotic normality of the estimator is proved.
© 2003 Elsevier Inc. All rights reserved.

AMS 2000 subject classifications: 62G07; 62H12

Keywords: Elliptical distributions; Kernel density estimator

1. Introduction

It is well known that in high dimensions nonparametric kernel density estimators perform poorly for small samples and have a very slow optimal convergence rate (cf. [14,15]). This is one phenomenon of the so-called "curse of dimensionality". Thus there is a need for new methods of density estimation in order to overcome this problem. In this paper we choose a semiparametric approach which is based on elliptical densities. Our approach goes back partially to Stute and Werner [17], and to Cui and He [5]. The new idea of the estimator introduced in this paper is to transform the data before applying the nonparametric estimator in order to avoid

Fax: +49-3677-693270. E-mail address: [email protected].

0047-259X/$ - see front matter © 2003 Elsevier Inc. All rights reserved. doi:10.1016/j.jmva.2003.09.007


convergence problems in boundary regions. Moreover, in the two papers mentioned before, it is supposed that at least a part of the parameters of the elliptical distribution is known. By contrast, all parameters are assumed to be unknown in this paper. The estimator established in this paper offers interesting applications in areas where density estimators are needed for high-dimensional data [6]. Discriminant analysis is one such potential field. It should be pointed out that fitting an elliptical distribution is a useful alternative to the frequently used multivariate normal distribution. Accounts of the parametric estimation theory of elliptical distributions may be found in [1]; [3, p. 206]; [8]. The density $f$ of an elliptical distribution on $\mathbb{R}^d$ is given by
$$f(x) = \det(\Sigma)^{-1/2}\, g\big((x-\mu)^T \Sigma^{-1}(x-\mu)\big) \qquad (x \in \mathbb{R}^d), \tag{1.1}$$
where $\mu \in \mathbb{R}^d$ is the mean and $g : \mathbb{R}_+ \to \mathbb{R}_+$ is a measurable function with
$$\int_{\mathbb{R}^d} g(x^T x)\, dx = 1, \qquad \mathbb{R}_+ = [0, +\infty).$$
We restrict ourselves to the case $d \ge 2$. Assume that $g$ is chosen such that $\Sigma$ is the covariance matrix of the distribution determined by (1.1). This additional condition on $g$ ensures the identifiability of the parameters in the distribution model (cf. [8, Theorems 2.6.2 and 2.6.5]). The theory of elliptical distributions is presented in the monograph by Fang and Zhang [8], where further references are given (see also [7]). We combine the components of $\mu$ and $\Sigma = (\sigma_{ij})_{i,j=1\ldots d}$ in a parameter vector $\theta = (\mu_1, \ldots, \mu_d, \sigma_{11}, \sigma_{12}, \ldots, \sigma_{dd})^T \in \mathbb{R}^{d(d+3)/2}$ without repeating identical quantities ($\Sigma$ is symmetric).

The main idea for estimating $f$ is to use nonparametric methods as well as parametric estimators. The estimation procedure works as follows: first an estimator for $\theta$ is computed, then $g$ is estimated in a nonparametric way and finally, the estimators for $\mu$ and $\Sigma$ are plugged in. It turns out that the optimal rate of uniform strong convergence of the estimator for $f$ coincides with the optimal rate for the usual one-dimensional kernel density estimator. Therefore the optimal rate does not depend on the dimension. A further advantage of our estimator is that the methods of bandwidth selection known from one-dimensional density estimation theory apply (cf. [9,10]).

The paper is organized as follows: The estimator is developed in Section 2. In Section 3 we provide a theorem about the asymptotic normality of the density estimator. Moreover, we give the rate of uniform strong convergence. The proofs are deferred to Section 4.
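For the multivariate normal distribution the generator in (1.1) is known in closed form, which allows a small numerical sanity check of the density representation. The following pure-Python sketch uses the standard fact (not from the paper) that the Gaussian generator is $g(t) = (2\pi)^{-d/2} e^{-t/2}$, evaluates (1.1) for $d = 2$, and compares it with the usual bivariate normal density; all concrete numbers are illustrative.

```python
import math

# Hedged sketch: for the d-variate normal the generator in (1.1) is
# g(t) = (2*pi)**(-d/2) * exp(-t/2) (a standard fact, not from the paper).
# We check for d = 2 that det(Sigma)**(-1/2) * g((x-mu)^T Sigma^{-1} (x-mu))
# reproduces the usual bivariate normal density.

def g_normal(t, d):
    """Generator of the d-variate standard normal, plugged into (1.1)."""
    return (2.0 * math.pi) ** (-d / 2.0) * math.exp(-t / 2.0)

def elliptical_density(x, mu, sigma, g, d=2):
    """f(x) = det(Sigma)^{-1/2} g((x-mu)^T Sigma^{-1} (x-mu)) for d = 2."""
    (a, b), (c, e) = sigma
    det = a * e - b * c
    inv = [[e / det, -b / det], [-c / det, a / det]]
    z = [x[0] - mu[0], x[1] - mu[1]]
    quad = (z[0] * (inv[0][0] * z[0] + inv[0][1] * z[1])
            + z[1] * (inv[1][0] * z[0] + inv[1][1] * z[1]))
    return det ** -0.5 * g(quad, d)

mu = [1.0, -0.5]
sigma = [[2.0, 0.6], [0.6, 1.0]]
x = [1.5, 0.0]

f_ell = elliptical_density(x, mu, sigma, g_normal)

# Direct evaluation of the bivariate normal density for comparison.
det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
z = [x[0] - mu[0], x[1] - mu[1]]
quad = (z[0] * (sigma[1][1] * z[0] - sigma[0][1] * z[1])
        + z[1] * (-sigma[1][0] * z[0] + sigma[0][0] * z[1])) / det
f_direct = math.exp(-quad / 2.0) / (2.0 * math.pi * math.sqrt(det))

assert abs(f_ell - f_direct) < 1e-12
```

The agreement is exact up to floating-point rounding, since both expressions are algebraically identical for the Gaussian generator.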

2. Estimators

Let us consider a random vector $X$ having the density given by (1.1). It is well known that $Z = \Sigma^{-1/2}(X - \mu)$ has a spherical distribution. Moreover, $Z \stackrel{d}{=} R u^{(d)}$, where $u^{(d)}$ is uniformly distributed on the unit sphere of $\mathbb{R}^d$, $R$ is a random variable taking values in $\mathbb{R}_+$, and $R$, $u^{(d)}$ are independent. $Y_1 \stackrel{d}{=} Y_2$ means that $Y_1$ and $Y_2$ have the same distribution. Let $Y = (X - \mu)^T \Sigma^{-1}(X - \mu)$. Now $Y \stackrel{d}{=} R^2$, which has the density
$$f_Y(y) = s_d\, y^{d/2-1} g(y) \quad (y \in \mathbb{R}_+), \qquad s_d = \frac{\pi^{d/2}}{\Gamma(d/2)} \tag{2.1}$$
(cf. [8, Theorem 2.5.5 and Corollaries 1, 2, p. 65]). To estimate $f$, we need some estimator for $g$. At first glance one could have two ideas:

* Idea 1: If $\hat f_Y$ is an estimator for $f_Y$, then $\hat g(y) = s_d^{-1} y^{-d/2+1} \hat f_Y(y)$ is an estimator for $g$. But then $\hat g(y) \to \infty$ as $y \to 0$ if $\hat f_Y(y)$ is bounded away from $0$ in a neighbourhood of $0$.
* Idea 2: We consider $\tilde Y = Y^{d/2}$ instead of $Y$. Let $\hat f_{\tilde Y}(y)$ be an estimator for the density $f_{\tilde Y}(y)$ of $\tilde Y$. Then
$$f_{\tilde Y}(y) = \frac{2}{d}\, s_d\, g(y^{2/d}) \quad \text{and} \quad \check g(y) = \frac{d}{2}\, s_d^{-1}\, \hat f_{\tilde Y}(y^{d/2}).$$

This estimator $\check g$ for $g$ behaves well near zero. But the estimator $\check g$ has the disadvantage that it becomes wiggly for large values of $y$, since the data points are stretched by the power with exponent $d/2$ if $d > 2$. Now the final idea is to combine the advantages of the two estimators above and to introduce a rather general type of estimators. For this purpose, let $\psi : \mathbb{R}_+ \to \mathbb{R}_+$ be a function having a derivative $\psi'$ with $\psi'(y) > 0$ for $y \ge 0$, and the property $\psi(0) = 0$. Then the density $h$ of $\tilde Y = \psi(Y)$ is given by
$$h(t) = \Psi'(t)\, f_Y(\Psi(t)) = s_d\, \Psi'(t)\, \Psi(t)^{d/2-1} g(\Psi(t)),$$
where $\Psi$ is the inverse function of $\psi$. Further,
$$g(x) = s_d^{-1} x^{-d/2+1} \psi'(x)\, h(\psi(x)). \tag{2.2}$$

This formula shows how to compute $g$ from $h$. We will see that $h$ can be estimated nonparametrically. Then we obtain an estimator for $g$ by applying (2.2). The function $\psi$ should be chosen such that the disadvantages described above are avoided. If $\lim_{x \to 0+} x^{-d/2+1} \psi'(x)$ is a positive constant and $\psi'$ is bounded, then we can expect a good behaviour of an estimator for $g$ in a neighbourhood of $0$; $\lim_{x \to \infty} \psi(x)/x = \text{const}$ ensures good properties of $\hat f_n$ for large values of the argument. The precise conditions on $\psi$ are given in the next section.

Now we turn to establishing the specific estimator for $f$. Let $X_1, \ldots, X_n$ be a sample of $\mathbb{R}^d$-valued random vectors having an elliptical distribution according to (1.1). Suppose that $g$ is bounded. Let $\hat\mu_n$ and $\hat\Sigma_n$ be estimators for $\mu$ and $\Sigma$, respectively. Then $\hat\theta_n = (\hat\mu_1, \ldots, \hat\mu_d, \hat\sigma_{11}, \hat\sigma_{12}, \ldots, \hat\sigma_{dd})^T \in \mathbb{R}^{d(d+3)/2}$ is an estimator for $\theta$. Suppose that $\hat\theta_n$ fulfils the following property:
$$\limsup_{n \to \infty} \sqrt{\frac{n}{\ln\ln(n)}}\, \|\hat\theta_n - \theta\| = C_0 \quad \text{a.s.} \tag{2.3}$$
with a finite nonrandom constant $C_0 > 0$. For example, $\hat\theta_n$ arising from the sample mean and the sample covariance matrix satisfies this condition in view of Strassen's law of the iterated logarithm. Another appropriate choice for $\hat\mu_n$ and $\hat\Sigma_n$ could be robust M-estimators (cf. [11]). The problem of efficient semiparametric estimation is examined in the monograph by Bickel et al. [2]. This monograph also provides a comprehensive overview of the literature on semiparametric estimation.

The next step is to establish a nonparametric estimator for the density $h$ of $\tilde Y$. Let
$$Y_{in} = \psi\big((X_i - \hat\mu_n)^T \hat\Sigma_n^{-1}(X_i - \hat\mu_n)\big), \quad i = 1, \ldots, n.$$
Using the transformed sample $Y_{1n}, \ldots, Y_{nn}$, we define the kernel estimate for $h$:
$$\hat h_n(y) = \frac{1}{n B(n)} \sum_{i=1}^{n} \Big( K\big((y - Y_{in})\, B(n)^{-1}\big) + K\big((y + Y_{in})\, B(n)^{-1}\big) \Big) \quad (y \in \mathbb{R}_+) \tag{2.4}$$

with a random bandwidth $B(n)$ and a kernel function $K$. The additional term $K((y + Y_{in})B(n)^{-1})$ is inserted in order to avoid boundary effects in the neighbourhood of zero, in accordance with the idea of reflection methods. The reader interested in reflection methods is referred to [4,18]. Using the estimator $\hat h_n$ from (2.4), we get the estimator for $f$ as follows:
$$\hat g_n(z) = s_d^{-1} z^{-d/2+1} \psi'(z)\, \hat h_n(\psi(z)) \quad (z \in \mathbb{R}_+),$$
$$\hat f_n(x) = \det(\hat\Sigma_n)^{-1/2}\, \hat g_n\big((x - \hat\mu_n)^T \hat\Sigma_n^{-1}(x - \hat\mu_n)\big) \quad (x \in \mathbb{R}^d). \tag{2.5}$$
The asymptotic properties of $\hat f_n(x)$ are studied in the next sections. Another transformation-based estimator for a density is considered in El Barmi and Simonoff [6]. Figs. 1 and 2 below show an example of estimators for $g$. The data were taken from the UCI Machine Learning Repository (dataset "breast cancer"—new diagnostic database—variables 3, 8, 16, 29). Obviously, there is a significant difference between the estimated function $g$ and the function $g$ arising from a multivariate normal distribution.
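The three-step procedure behind (2.4) and (2.5) can be sketched in code. The following pure-Python illustration is a hedged sketch, not the author's implementation: the sample size, the bandwidth constant, the Epanechnikov kernel (which meets Condition K of Section 3 only up to the behaviour of $K'$ at $\pm 1$) and the numerical derivative of $\psi$ are all choices made here for illustration. For $d = 2$ the transform (3.2) of Section 3 reduces to the identity.

```python
import math
import random

# Hedged sketch of the estimator (2.4)-(2.5) for d = 2 on simulated
# bivariate normal data.  All concrete constants are illustrative.
random.seed(0)
d, n, a = 2, 2000, 1.0

mu, L = [1.0, -1.0], [[1.2, 0.0], [0.4, 0.9]]     # Sigma = L L^T
X = []
for _ in range(n):
    e = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
    X.append([mu[k] + L[k][0] * e[0] + L[k][1] * e[1] for k in range(d)])

# Step 1 (parametric part): sample mean and covariance as theta_hat.
mu_hat = [sum(x[k] for x in X) / n for k in range(d)]
S = [[sum((x[j] - mu_hat[j]) * (x[k] - mu_hat[k]) for x in X) / n
      for k in range(d)] for j in range(d)]
det_S = S[0][0] * S[1][1] - S[0][1] * S[1][0]
S_inv = [[S[1][1] / det_S, -S[0][1] / det_S],
         [-S[1][0] / det_S, S[0][0] / det_S]]

def quad_form(x):
    z = [x[0] - mu_hat[0], x[1] - mu_hat[1]]
    return (z[0] * (S_inv[0][0] * z[0] + S_inv[0][1] * z[1])
            + z[1] * (S_inv[1][0] * z[0] + S_inv[1][1] * z[1]))

def psi(x):          # transform (3.2); reduces to the identity for d = 2
    return -a + (a ** (d / 2) + x ** (d / 2)) ** (2 / d)

def psi_prime(x, eps=1e-6):   # crude numerical derivative, enough here
    return (psi(x + eps) - psi(x)) / eps

# Step 2 (nonparametric part): reflection kernel estimator (2.4) for h.
Y = [psi(quad_form(x)) for x in X]
b = n ** (-1 / 5)             # bandwidth of order n^{-1/(2p+1)} with p = 2

def K(t):                     # Epanechnikov kernel supported on [-1, 1]
    return 0.75 * (1.0 - t * t) if abs(t) < 1.0 else 0.0

def h_hat(y):
    return sum(K((y - yi) / b) + K((y + yi) / b) for yi in Y) / (n * b)

# Step 3: plug-in estimators (2.5) for g and f.
s_d = math.pi ** (d / 2) / math.gamma(d / 2)

def g_hat(z):
    return z ** (-d / 2 + 1) * psi_prime(z) * h_hat(psi(z)) / s_d

def f_hat(x):
    return det_S ** -0.5 * g_hat(quad_form(x))

# Sanity check at a point away from mu against the true normal density.
x0 = [2.0, 0.0]
Sig = [[1.44, 0.48], [0.48, 0.97]]                # the true Sigma = L L^T
detSig = Sig[0][0] * Sig[1][1] - Sig[0][1] * Sig[1][0]
z0 = [x0[0] - mu[0], x0[1] - mu[1]]
q0 = (z0[0] * (Sig[1][1] * z0[0] - Sig[0][1] * z0[1])
      + z0[1] * (-Sig[1][0] * z0[0] + Sig[0][0] * z0[1])) / detSig
f_true = math.exp(-q0 / 2.0) / (2.0 * math.pi * math.sqrt(detSig))
assert abs(f_hat(x0) - f_true) < 0.05
```

Note that only a one-dimensional kernel smoother is used, regardless of the data dimension; this is exactly the mechanism behind the dimension-free convergence rate discussed in Section 3.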

3. Asymptotic properties of the density estimators

Prior to formulating the main results of the paper, we provide the assumptions on $K$ and $\psi$ of the estimator $\hat f_n(x)$ defined in Section 2 by (2.4) and (2.5). Here $p$ is some positive even integer.


Fig. 1. Kernel estimator $\hat g_n$ — group 1.

Fig. 2. Kernel estimator $\hat g_n$ — group 2.

Condition K(p). The kernel function $K : \mathbb{R} \to \mathbb{R}$ has a Lipschitz continuous derivative on $\mathbb{R}$ and vanishes outside the interval $[-1, 1]$. Moreover,
$$\int_{-1}^{1} K(t)\, dt = 1, \qquad \int_{-1}^{1} t^k K(t)\, dt = 0 \quad \text{for } k = 1, \ldots, p - 1.$$


Condition T(p). The $(p+1)$th order derivative of $\Psi$ exists and is continuous on $(0, \infty)$; $\Psi$ is the inverse function of $\psi$; $\psi'$ is positive and bounded on $(0, +\infty)$, and $\psi''$ is bounded on $(0, +\infty)$. The function $x \mapsto x^{d/2-1} \psi'(x)^{-1}$ has a bounded derivative on $[0, M_1]$ with some $M_1 > 0$. Moreover,
$$\lim_{x \downarrow 0} x^{-d/2+1} \psi'(x) = C_1 > 0. \tag{3.1}$$
There are constants $\alpha \in (0, 1]$ and $C_2, M_2 > 0$ such that
$$|\Psi(t)| \le C_2 |t|^{\alpha} \quad \text{for } t \in [0, M_2].$$

Example (for $\psi$).
$$\psi(x) = -a + \big(a^{d/2} + x^{d/2}\big)^{2/d} \tag{3.2}$$
with a constant $a > 0$. Then
$$\lim_{x \downarrow 0} x^{-d/2+1} \psi'(x) = a^{1-d/2}, \qquad \lim_{x \to \infty} \frac{\psi(x)}{x} = 1$$
and
$$\Psi(t) = \big((t + a)^{d/2} - a^{d/2}\big)^{2/d} = a^{1-2/d} \left(\frac{d}{2}\, t\right)^{2/d} + o\big(t^{2/d}\big) \quad \text{as } t \downarrow 0.$$
Hence Condition T(p) is satisfied with $\alpha = 2/d$.

The random bandwidth $B(n)$ is assumed to fulfil the conditions
$$C_3 b(n) \le B(n) \le C_4 b(n), \tag{3.3}$$
$$\lim_{n \to \infty} b(n) \ln\ln n = 0 \quad \text{and} \quad b(n) \ge C_5 n^{-1/5}, \tag{3.4}$$

where $C_3, C_4, C_5 > 0$ are constants and $\{b(n)\}_{n=1,2,\ldots}$ is a sequence of positive real numbers. Now the theorem about rates of uniform strong convergence of $\hat f_n$ defined in (2.5) reads as follows:

Theorem 3.1. Suppose that the $p$th order derivative $g^{(p)}$ of $g$ exists and is bounded on $\mathbb{R}_+$ for some even integer $p \ge 2$. Let Conditions K(p), T(p), (2.3), (3.3), (3.4) and $E\|Z\|^t < +\infty$ be satisfied for some $t > 4$. Then, for any compact set $D$ with $\mu \notin D$,
$$\sup_{x \in D} |\hat f_n(x) - f(x)| = O\big(\sqrt{\ln n}\,(n b(n))^{-1/2} + b^p(n)\big) \quad \text{a.s.} \tag{3.5}$$
For any compact set $D$ with $\mu \in D$, we still have
$$\sup_{x \in D} |\hat f_n(x) - f(x)| = O\big(\sqrt{\ln n}\,(n b(n))^{-1/2} + b^{\gamma}(n)\big) \quad \text{a.s.}$$
with $\gamma = \min\{\alpha, \alpha + 1 - \alpha d/2\}$, $\alpha$ from Condition T(p).

This theorem improves the rates given in [5]. For the function $\psi$ determined by (3.2), $\gamma$ is equal to $2/d$. Putting $b(n) = \text{const} \cdot (n/\ln(n))^{-1/(2p+1)}$, we obtain the optimized (w.r.t. the bandwidth) convergence rate $\sup_{x \in D} |\hat f_n(x) - f(x)| = O\big((n/\ln(n))^{-p/(2p+1)}\big)$ in (3.5). This rate is the optimal one known from one-dimensional kernel density estimation. The strong convergence rate of $\hat f_n$ for arguments away from $\mu$ does not depend on the dimension. The reason for the slow convergence rate of $\hat f_n$ for arguments close to $\mu$ is the following: In many cases we have $\lim_{t \downarrow 0} h'(t) = +\infty$ or $-\infty$. This holds, for example, if $\lim_{x \downarrow 0} x^{-d/2+1} g'(x) = +\infty$ or $-\infty$ and the derivative of the function $\Psi' \Psi^{d/2-1}$ is bounded in a neighbourhood of zero.

The following theorem states the asymptotic normality of the estimator $\hat f_n$. Here $V_n = o_P(a_n)$ means that $a_n^{-1} V_n \xrightarrow{P} 0$; $\{V_n\}$ and $\{a_n\}$ are sequences of random variables and positive real numbers, respectively.

Theorem 3.2. Let the assumptions of Theorem 3.1 be satisfied and $b(n) = C_6 n^{-1/(2p+1)}$ with a constant $C_6 > 0$. Moreover, assume that
$$|B(n) b(n)^{-1} - 1| = o_P\big(\sqrt{\ln\ln n}\; n^{-1/2}\big). \tag{3.6}$$
Then for any $x \in \mathbb{R}^d$, $x \ne \mu$, such that $g^{(p)}$ is continuous at $u := (x - \mu)^T \Sigma^{-1}(x - \mu)$,
$$\sqrt{n B(n)}\, \big(\hat f_n(x) - f(x)\big) \xrightarrow{D} N(\bar\mu, \bar\sigma^2),$$
where
$$\bar\sigma^2 = \det(\Sigma)^{-1} s_d^{-1} u^{-d/2+1} \psi'(u)\, g(u) \int_{-1}^{1} K^2(t)\, dt,$$
$$\bar\mu = \det(\Sigma)^{-1/2} s_d^{-1} u^{-d/2+1} \psi'(u)\, C_6^{(2p+1)/2}\, \frac{1}{p!}\, h^{(p)}(\psi(u)) \int_{-1}^{1} t^p K(t)\, dt.$$

A similar theorem was proved by Stute and Werner [17] in the case of known $\mu$ and $\psi(x) \equiv x$. From Theorem 3.2 one may construct confidence regions for $f$, for example. But when doing so, one needs estimators for $\bar\mu$ and $\bar\sigma^2$, which in turn requires an appropriate estimator for $h^{(p)}(\psi(u))$. Moreover, $\hat\mu_n$ and $\hat\Sigma_n$ should be used instead of $\mu$ and $\Sigma$. Bandwidth selection methods satisfying (3.6) can be found in [9].
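The claimed properties of the example transform (3.2) and the order of the bandwidth used in Theorem 3.1 can be checked numerically. The following is a hedged sketch; the values $d = 3$, $a = 0.5$, $p = 2$ and $n = 10000$ are illustrative choices, not taken from the paper.

```python
import math

# Numerical check of the example transform (3.2),
# psi(x) = -a + (a**(d/2) + x**(d/2))**(2/d), with illustrative d = 3, a = 0.5.
d, a = 3, 0.5

def psi(x):
    return -a + (a ** (d / 2) + x ** (d / 2)) ** (2 / d)

def psi_prime(x):
    # closed form: psi'(x) = x**(d/2-1) * (a**(d/2) + x**(d/2))**(2/d-1)
    return x ** (d / 2 - 1) * (a ** (d / 2) + x ** (d / 2)) ** (2 / d - 1)

# psi(0) = 0 (up to floating-point rounding)
assert abs(psi(0.0)) < 1e-12
# (3.1): x**(-d/2+1) * psi'(x) -> a**(1-d/2) as x -> 0
x = 1e-8
assert abs(x ** (-d / 2 + 1) * psi_prime(x) - a ** (1 - d / 2)) < 1e-6
# psi(x)/x -> 1 as x -> infinity
assert abs(psi(1e8) / 1e8 - 1.0) < 1e-6

# Bandwidth order of Theorem 3.1: b(n) = const * (n/ln(n))**(-1/(2p+1))
p, n = 2, 10000
b = (math.log(n) / n) ** (1 / (2 * p + 1))
assert 0.0 < b < 1.0
```

For $p = 2$ this bandwidth is of the order $n^{-1/5}$ up to a logarithmic factor, in line with condition (3.4).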

4. Proofs

Assume that (2.3), (3.3) and (3.4) are satisfied, and that $g'$ exists and is bounded on finite subintervals of $\mathbb{R}_+$. Thus there is some $n_0$ such that $3 C_4 b(n) < 1$ for $n \ge n_0$. Further on let $n \ge n_0$. First we provide two lemmas which are used at several places below.


Lemma 4.1. Under Condition T(2), the functions $t \mapsto \Psi'(t)\Psi(t)^{d/2-1}$ and $h$ are bounded on bounded subsets of $\mathbb{R}_+$.

Proof. Condition (3.1) implies that
$$\lim_{t \downarrow 0} \Psi'(t)\Psi(t)^{d/2-1} = C_1^{-1}.$$
Since $\psi'(t) > 0$ for $t > 0$, the function $t \mapsto \Psi'(t)\Psi(t)^{d/2-1}$ and hence $h$ are bounded on any interval $[m_1, m_2]$, $m_1 \ge 0$. □

Lemma 4.2. Under Condition T(2),
$$\sup_{t, v \in [0, \bar M]} |h(t) - h(v)|\, |t - v|^{-\gamma} < +\infty$$
for any $\bar M > 0$, where $\gamma = \min\{\alpha, \alpha + 1 - \alpha d/2\}$.

Proof. Observe that by the Lipschitz continuity of $g$,
$$|h(t) - h(v)| \le C_7 |\Psi(t) - \Psi(v)|$$
uniformly for $t, v \in [0, \bar M]$. By Condition T(2),
$$|\Psi(t) - \Psi(v)| \le C_8 |t - v|^{\gamma}$$
uniformly for $t, v \in [0, \bar M]$; $C_7, C_8 > 0$ are constants. This completes the proof. □

Let us introduce some notations:
$$K_b(y, t) = K((y - t)/b) + K((y + t)/b) \quad \text{for } y, t \ge 0,$$
$$\bar Y_{in} = (X_i - \hat\mu_n)^T \hat\Sigma_n^{-1}(X_i - \hat\mu_n), \qquad \bar Y_i = (X_i - \mu)^T \Sigma^{-1}(X_i - \mu), \qquad \tilde Y_i = \psi(\bar Y_i)$$
for $i = 1, \ldots, n$, and
$$\tilde h_n(y, b) = \frac{1}{nb} \sum_{i=1}^{n} K_b(y, \tilde Y_i), \qquad \hat h_n(y, b) = \frac{1}{nb} \sum_{i=1}^{n} K_b(y, Y_{in}) \quad (y \in \mathbb{R}_+).$$

Note that $Y_{in} = \psi(\bar Y_{in})$ for $i = 1, \ldots, n$. Obviously, each $\tilde Y_i$ has the density $h$, so that $\tilde h_n(\cdot, b)$ is the usual density estimator for $h$ with some boundary adjustment. In the first part of this section we prove strong convergence rates for $\hat h_n$ and later for $\hat f_n$. Let $\underline b_n = C_3 b(n)$, $\bar b_n = C_4 b(n)$. The compact set $[m, M] \times [\underline b_n, \bar b_n]$, with arbitrary $m$ and $M$, $0 \le m < M$, can be covered with closed rectangles $U_1, \ldots, U_{n^2}$ having sides of length $(M - m) n^{-1}$, $(\bar b_n - \underline b_n) n^{-1}$ and centres $(u_1, b_1), \ldots, (u_{n^2}, b_{n^2})$ such that $\bigcup_{i=1}^{n^2} U_i = [m, M] \times [\underline b_n, \bar b_n]$. $m$ and $M$ will be determined later. Observe that
$$\sup_{y \in [m, M]} |\hat h_n(y) - h(y)| \le \sup_{y \in [m, M]} \sup_{b \in [\underline b_n, \bar b_n]} |\hat h_n(y, b) - h(y)|$$
$$\le \max_{k=1,\ldots,n^2} \Big( \sup_{(y, b) \in U_k} |\hat h_n(y, b) - \hat h_n(u_k, b_k)| + |\hat h_n(u_k, b_k) - \tilde h_n(u_k, b_k)| + |\tilde h_n(u_k, b_k) - h(u_k)| + \sup_{y : (y, b_k) \in U_k} |h(u_k) - h(y)| \Big). \tag{4.1}$$

Lemma 4.3. Assume that Condition T(2) is satisfied. Then
$$\max_{k=1,\ldots,n^2} \; \sup_{y : (y, b_k) \in U_k} |h(u_k) - h(y)| = \begin{cases} O(n^{-1}) & \text{if } m \ne 0, \\ O(n^{-\gamma}) & \text{if } m = 0, \end{cases} \qquad \gamma \text{ as above.}$$

Proof. By construction of the sets $U_k$, the assertion follows from Lemma 4.2. □

Lemma 4.4. Assume that the $p$th order derivative $h^{(p)}$ of $h$ exists for some even integer $p \ge 2$ and is bounded on every interval $[m_1, m_2]$ with $m_1 > 0$. Moreover, let Conditions K(p) and T(p) be fulfilled. Then
$$\max_{k=1,\ldots,n^2} |\tilde h_n(u_k, b_k) - h(u_k)| = O\big(\sqrt{\ln n}\,(n b(n))^{-1/2} + \beta_n\big) \quad \text{a.s.},$$
where $\beta_n = b^p(n)$ if $m > 0$ and $\beta_n = b^{\gamma}(n)$ if $m = 0$, $\gamma$ as above.

Proof. By standard arguments, one can show that
$$\max_{k=1,\ldots,n^2} |\tilde h_n(u_k, b_k) - E\tilde h_n(u_k, b_k)| = O\big(\sqrt{\ln n}\,(n b(n))^{-1/2}\big) \quad \text{a.s.} \tag{4.2}$$
(cf. [16, Theorem 1.2]). In the case $m > 0$, we obtain
$$\max_{k=1,\ldots,n^2} |E\tilde h_n(u_k, b_k) - h(u_k)| = \max_{k=1,\ldots,n^2} \left| b_k^{-1} \int_0^{\infty} K((u_k - t)/b_k)\, h(t)\, dt - h(u_k) \right| = O(b^p(n)) \tag{4.3}$$

by Taylor expansion for $n$ large enough. Eqs. (4.2) and (4.3) imply Lemma 4.4 in the case $m > 0$. By Lemma 4.2,
$$\sup_{b \in [\underline b_n, \bar b_n]} \sup_{y \in [0, M]} |E\tilde h_n(y, b) - h(y)| = \sup_{b \in [\underline b_n, \bar b_n]} \sup_{y \in [0, M]} \left| b^{-1} \int_0^{\infty} \big( K((y - t)/b) + K((y + t)/b) \big)\big( h(t) - h(y) \big)\, dt \right|$$
$$\le \sup_{b \in [\underline b_n, \bar b_n]} \sup_{y \in [2b, M]} \left| \int_{-1}^{1} K(t)\big( h(y - tb) - h(y) \big)\, dt \right| + \sup_{y \in [0, 2b)} \left| \int_{-y/b}^{1} \big( K(t) + K(t + 2y/b) \big)\big( h(y - tb) - h(y) \big)\, dt \right| = O\big(b^{\gamma}(n)\big).$$
Eq. (4.2) completes the proof of Lemma 4.4 in the case $m = 0$. □

Lemma 4.5. Assume that Conditions K(1) and T(2) are satisfied and $E\|Z\|^t < +\infty$ for some $t > 4$. Then
$$\max_{k=1,\ldots,n^2} |\hat h_n(u_k, b_k) - \tilde h_n(u_k, b_k)| = o\big((n b(n))^{-1/2}\big) \quad \text{a.s.}$$

For the proof of this lemma, a series of further lemmas is needed. Using the Lipschitz continuity of $K'$, we obtain
$$\max_{k=1,\ldots,n^2} |\hat h_n(u_k, b_k) - \tilde h_n(u_k, b_k)| \le B_{1n} + O\big(n^{-1} b(n)^{-2}\big)(B_{2n} + B_{3n}), \tag{4.4}$$
where
$$G_b(y, t) = K'((y - t)/b) - K'((y + t)/b),$$
$$B_{1n} = \max_{k=1,\ldots,n^2} \left| n^{-1} b_k^{-2} \sum_{i=1}^{n} (\bar Y_{in} - \bar Y_i)\, G_{b_k}(u_k, \tilde Y_i)\, \psi'(\bar Y_i) \right|,$$
$$B_{2n} = \max_{k=1,\ldots,n^2} \sum_{i=1}^{n} \Big( (\bar Y_i - \bar Y_{in})^2 + |\bar Y_i - \bar Y_{in}|\, I\big(|\psi(\bar Y_{in}) - \tilde Y_i| > b_k\big) \Big),$$
$$B_{3n} = \max_{k=1,\ldots,n^2} b_k^{-1} \sum_{i=1}^{n} (\bar Y_i - \bar Y_{in})^2 \Big( I(|u_k - \tilde Y_i| \le 2 b_k) + I(|u_k + \tilde Y_i| \le 2 b_k) \Big)\, \psi'(\bar Y_i).$$
Let $Z_i = \Sigma^{-1/2}(X_i - \mu)$ such that $\bar Y_i = Z_i^T Z_i$ and
$$\bar Y_{in} - \bar Y_i = Z_i^T \Delta_n Z_i + 2 \tilde\mu_n^T Z_i + \beta_n,$$
where $\Delta_n := \Sigma^{1/2} \hat\Sigma_n^{-1} \Sigma^{1/2} - I$, $\tilde\mu_n^T := (\mu - \hat\mu_n)^T \hat\Sigma_n^{-1} \Sigma^{1/2}$ and $\beta_n := (\mu - \hat\mu_n)^T \hat\Sigma_n^{-1}(\mu - \hat\mu_n)$. By virtue of (2.3), we obtain that
$$\Delta_n = O\big(n^{-1/2} \sqrt{\ln\ln n}\big), \qquad \tilde\mu_n = O\big(n^{-1/2} \sqrt{\ln\ln n}\big), \qquad \beta_n = O\big(n^{-1} \ln\ln n\big) \quad \text{a.s.}$$
and
$$|\bar Y_{in} - \bar Y_i| \le \kappa_n \big(\|Z_i\|^2 + 1\big), \qquad \kappa_n = O\big(n^{-1/2} \sqrt{\ln\ln n}\big) \quad \text{a.s.} \tag{4.5}$$
($i = 1, \ldots, n$); $\{\kappa_n\}$ is a sequence of positive real numbers not depending on $i$. Moreover,
$$B_{1n} \le O\left(\frac{\sqrt{\ln\ln n}}{n^{3/2} b^2(n)}\right) \sum_{j,l=1}^{d} \sum_{\delta=0,1} \max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} Z_{ij}^{\delta} Z_{il}\, G_{b_k}(u_k, \tilde Y_i)\, \psi'(\bar Y_i) \right| + O\left(\frac{\ln\ln n}{n^2 b^2(n)}\right) \sum_{j,l=1}^{d} \max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} G_{b_k}(u_k, \tilde Y_i)\, \psi'(\bar Y_i) \right|, \tag{4.6}$$

where $Z_i = (Z_{i1}, \ldots, Z_{id})^T$. In the sequel we derive the convergence rates of $B_{1n}$ to $B_{3n}$. For this purpose, we next need the following auxiliary statement.

Lemma 4.6. Assume that Condition T(2) is satisfied. Let $K : \mathbb{R} \to \mathbb{R}$ be a bounded function with $K(t) = 0$ for $|t| > 1$. Then, for $j, l = 1, \ldots, d$ and $\delta, \kappa = 0, 1$,
$$\max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} (U_{nijlk} - E U_{nijlk}) \right| = O\big(\sqrt{n b(n) \ln(n)}\big) \quad \text{a.s.}$$
with $U_{nijlk} := Z_{ij}^{\delta} Z_{il}^{\kappa}\, K((u_k - \tilde Y_i)/b_k)\, \psi'(\bar Y_i)$, and
$$\max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} (\bar U_{nijlk} - E \bar U_{nijlk}) \right| = O\big(\sqrt{n b(n) \ln(n)}\big) \quad \text{a.s.}$$
with $\bar U_{nijlk} := Z_{ij}^{\delta} Z_{il}^{\kappa}\, K((u_k + \tilde Y_i)/b_k)\, \psi'(\bar Y_i)$.

Proof. We only prove the first assertion since the proof of the second assertion proceeds similarly. Choose $\bar M$ such that $[0, \bar M] \supset \Psi([0, M + 1])$. Hence $U_{nijlk} = 0$ for $\omega$ with $\bar Y_i(\omega) > \bar M$, since then $\tilde Y_i > M + 1$ and $u_k \le M$. By Lemma 4.1,
$$|U_{nijlk}| \le \bar M^{(\delta+\kappa)/2} \sup_{t \in [0, \bar M]} \psi'(t) \sup_{t \in [-1, 1]} |K(t)|$$
and
$$\max_{i,k :\, 1 \le i \le n,\, 1 \le k \le n^2} D^2 U_{nijlk} \le \max_{k=1,\ldots,n^2} E Z_{1j}^{2\delta} Z_{1l}^{2\kappa}\, K^2((u_k - \tilde Y_1)/b_k)\, \psi'(\bar Y_1)^2$$
$$\le \bar M^{\delta+\kappa} \sup_{t \in [0, \bar M]} \psi'(t)^2 \max_{k=1,\ldots,n^2} E K^2((u_k - \tilde Y_1)/b_k) \le \text{const} \cdot \max_{k=1,\ldots,n^2} \int_{\max\{u_k - b_k, 0\}}^{u_k + b_k} h(t)\, dt = O(b(n))$$
for $j, l = 1, \ldots, d$. $D^2$ is the symbol for the variance. Let $a_n := \sqrt{n b(n) \ln(n)}$. Applying the Bernstein inequality (cf. [13, p. 112]), we get
$$P\left\{ \max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} (U_{nijlk} - E U_{nijlk}) \right| > \varepsilon a_n \right\} \le \sum_{k=1}^{n^2} P\left\{ \left| \sum_{i=1}^{n} (U_{nijlk} - E U_{nijlk}) \right| > \varepsilon a_n \right\}$$
$$\le C_9 n^2 \exp\big\{ -C_{10}\, \varepsilon^2 a_n^2\, (n b(n) + \varepsilon a_n)^{-1} \big\} \le C_{11} \exp\big\{ 2 \ln(n) - C_{12}\, \varepsilon^2 \ln(n)(1 + \varepsilon)^{-1} \big\}$$
for $\varepsilon > 1$; $C_9$ to $C_{12}$ are positive constants not depending on $n$, $j$, $l$ or $\varepsilon$. Hence
$$\sum_{n=1}^{\infty} P\left\{ \max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} (U_{nijlk} - E U_{nijlk}) \right| > \varepsilon a_n \right\} < +\infty$$
for $\varepsilon$ large enough, and the lemma follows by virtue of the Borel–Cantelli lemma. □

Throughout the remainder of this section, we suppose that Conditions K(1) and T(2) are satisfied.

Lemma 4.7. We have
(a) $E Z_{1j}\, G_b(y, \tilde Y_1)\, \psi'(\bar Y_1) = 0$ for $j = 1, \ldots, d$, $b > 0$;
(b) $\sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} |E Z_{1j} Z_{1l}\, G_b(y, \tilde Y_1)\, \psi'(\bar Y_1)| = O(b^2(n))$ for $j, l = 1, \ldots, d$;
and
(c) $\sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} |E G_b(y, \tilde Y_1)\, \psi'(\bar Y_1)| = O(b(n))$.


Proof. Obviously, $Z_1$ has a spherical distribution. Let $R_1 = \|Z_1\|$. Now $R_1$ and $R_1^{-1} Z_1$ are independent random variables (cf. [8, p. 57]) and $R_1^2$ has the density given by (2.1). Moreover, $E R_1^{-1} Z_1 = 0$.

(a) Hence
$$E Z_{1j}\, G_b(y, \tilde Y_1)\, \psi'(\bar Y_1) = E\big(R_1^{-1} Z_{1j}\big)\; E\big(R_1\, G_b(y, \psi(R_1^2))\, \psi'(R_1^2)\big) = 0$$
for $j = 1, \ldots, d$ and $y \in [0, M]$, which is the first assertion of the lemma.

(b) Let $\tilde g(t) = s_d\, t^{d/2} g(t)$. Since $|R_1^{-1} Z_{1j}| \le 1$ for $j = 1, \ldots, d$ and $\bar Y_1 = R_1^2$, we obtain the following inequalities and identities by partial integration and by Lemma 4.1:
$$\sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} |E Z_{1j} Z_{1l}\, G_b(y, \tilde Y_1)\, \psi'(\bar Y_1)| \le \sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} |E R_1^2\, G_b(y, \psi(R_1^2))\, \psi'(R_1^2)|$$
$$= \sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} \left| \int_0^{\infty} \big( K'((y - \psi(t))/b) - K'((y + \psi(t))/b) \big)\, \psi'(t)\, \tilde g(t)\, dt \right|$$
$$= \sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} \left| b \int_{-y/b}^{1} \big( K'(v) - K'(v + 2y b^{-1}) \big)\, \tilde g(\Psi(y + vb))\, dv \right|$$
$$= \sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} \left| b^2 \int_{-y/b}^{1} \big( K(v) + K(v + 2y b^{-1}) \big)\, \tilde g'(\Psi(y + vb))\, \Psi'(y + vb)\, dv \right|$$
$$\le O(b^2(n)) \sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} \int_{-y/b}^{1} \big( |K(v)| + |K(v + 2y b^{-1})| \big)\, \big|\Psi'(y + vb)\, \Psi(y + vb)^{d/2-1}\big|\, dv = O(b^2(n)).$$
Hence the proof of part (b) is complete.

(c) Analogously to part (b),
$$\sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} |E G_b(y, \tilde Y_1)\, \psi'(\bar Y_1)| \le O(b(n)) \sup_{y \in [0, M]} \sup_{b \in [\underline b_n, \bar b_n]} \left| \int_{-y/b}^{1} \big( K'(v) - K'(v + 2y b^{-1}) \big)\, \Psi(y + vb)^{d/2-1}\, g(\Psi(y + vb))\, dv \right| = O(b(n)),$$
which is assertion (c). □


Lemma 4.8. $B_{1n} = o\big(n^{-1/2} b(n)^{-1/2}\big)$ a.s.

Proof. An application of Lemmas 4.6 and 4.7 leads to
$$\max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} Z_{ij}^{\delta} Z_{il}\, G_{b_k}(u_k, \tilde Y_i)\, \psi'(\bar Y_i) \right| = O\big(\sqrt{n b(n) \ln(n)} + n b^2(n)\big) \quad \text{a.s.},$$
$$\max_{k=1,\ldots,n^2} \left| \sum_{i=1}^{n} G_{b_k}(u_k, \tilde Y_i)\, \psi'(\bar Y_i) \right| = O\big(\sqrt{n b(n) \ln(n)} + n b(n)\big) \quad \text{a.s.}$$
for $\delta = 0, 1$ and $j, l = 1, \ldots, d$. Hence, by (4.6), we obtain the lemma. □

Lemma 4.9. Suppose that $E\|Z\|^t < +\infty$ for some $t > 4$. Then
$$B_{2n} = o\big(\sqrt{n}\, b(n)^{3/2}\big) \quad \text{a.s.}$$

Proof. By the law of large numbers and (4.5), we obtain
$$\sum_{i=1}^{n} (\bar Y_i - \bar Y_{in})^2 \le O(\ln\ln n) \left( n^{-1} \sum_{i=1}^{n} \big( \|Z_i\|^4 + 1 \big) \right) = O(\ln\ln n) \quad \text{a.s.}$$
and, by the Lipschitz continuity of $\psi$,
$$\max_{k=1,\ldots,n^2} \sum_{i=1}^{n} |\bar Y_i - \bar Y_{in}|\, I\big(|\psi(\bar Y_{in}) - \psi(\bar Y_i)| > b_k\big) \le O\big(b(n)^{-t/2+1}\big) \sum_{i=1}^{n} |\bar Y_i - \bar Y_{in}|\, |\psi(\bar Y_{in}) - \psi(\bar Y_i)|^{t/2-1}$$
$$= O\big(b(n)^{-t/2+1}\big) \sum_{i=1}^{n} |\bar Y_i - \bar Y_{in}|^{t/2} = \sqrt{n}\, b(n)^{3/2}\, O\big(b(n)^{-t/2-1/2} (\ln\ln n)^{t/4} n^{-t/4+1/2}\big) \left( n^{-1} \sum_{i=1}^{n} \big( \|Z_i\|^t + 1 \big) \right)$$
$$= \sqrt{n}\, b(n)^{3/2}\, O\big((\ln\ln n)^{t/4}\, n^{-3(t-4)/20}\big) = o\big(\sqrt{n}\, b(n)^{3/2}\big) \quad \text{a.s.},$$
which implies the lemma. □

Lemma 4.10.
$$B_{3n} = o\big(\sqrt{n}\, b(n)^{3/2}\big) \quad \text{a.s.}$$


Proof. By (4.5), we deduce
$$B_{3n} \le O\big(n^{-1} \ln\ln n\big) \max_{k=1,\ldots,n^2} b_k^{-1} \sum_{i=1}^{n} \big(\bar Y_i^2 + 1\big)\, I\big(\bar Y_i \in [0, \bar M]\big)\, \psi'(\bar Y_i)\, \Big( I(|u_k - \tilde Y_i| \le 2 b_k) + I(|u_k + \tilde Y_i| \le 2 b_k) \Big)$$
$$\le O\big(n^{-1} \ln\ln n\big) \max_{k=1,\ldots,n^2} b_k^{-1} \sum_{i=1}^{n} \Big( I(|u_k - \tilde Y_i| \le 2 b_k) + I(|u_k + \tilde Y_i| \le 2 b_k) \Big)\, \psi'(\bar Y_i)$$
($\bar M$ as in Lemma 4.6). Observe that by Lemma 4.1,
$$\max_{k=1,\ldots,n^2} b_k^{-1} E I\big(|u_k - \tilde Y_i| \le 2 b_k\big) \le \max_{k=1,\ldots,n^2} b_k^{-1} \int_{\max\{u_k - 2 b_k, 0\}}^{u_k + 2 b_k} h(v)\, dv \le \text{const}$$
and
$$\max_{k=1,\ldots,n^2} b_k^{-1} E I\big(|u_k + \tilde Y_i| \le 2 b_k\big) \le \text{const}.$$
Applying Lemma 4.6, we obtain
$$B_{3n} = O\big(n^{-1} \ln\ln n\big)\, \Big( n + \sqrt{n \ln(n)}\, b(n)^{-1/2} \Big) = O(\ln\ln n) \quad \text{a.s.},$$
which is Lemma 4.10. □

Proof of Lemma 4.5. Combine Lemmas 4.8–4.10 and (4.4) to get Lemma 4.5. □

Lemma 4.11.
$$\max_{k=1,\ldots,n^2} \sup_{(y, b) \in U_k} |\hat h_n(y, b) - \hat h_n(u_k, b_k)| = O\big(n^{-1} b(n)^{-2}\big) \quad \text{a.s.}$$
and
$$\max_{k=1,\ldots,n^2} \sup_{(y, b) \in U_k} |\tilde h_n(y, b) - \tilde h_n(u_k, b_k)| = O\big(n^{-1} b(n)^{-2}\big) \quad \text{a.s.}$$

Proof. Observe that the Lipschitz continuity of $K$ implies
$$|K_b(y, t) - K_b(w, t)| \le |K((y - t)/b) - K((w - t)/b)| + |K((y + t)/b) - K((w + t)/b)| \le C_{13}\, |y - w|\, b^{-1}$$
for $y, w \in \mathbb{R}_+$, $b \in [\underline b_n, \bar b_n]$, and
$$|K_b(y, t) - K_B(y, t)| \le |K((y - t)/b) - K((y - t)/B)| + |K((y + t)/b) - K((y + t)/B)| \le C_{13}\, |b - B| \max\{b^{-1}, B^{-1}\}$$
for $y \in \mathbb{R}_+$, $b, B > 0$, with a constant $C_{13} > 0$. Hence, by construction of the sets $U_k$,
$$\max_{k=1,\ldots,n^2} \sup_{(y, b) \in U_k} \left| \frac{1}{n} \sum_{i=1}^{n} \big( b^{-1} K_b(y, Y_{in}) - b_k^{-1} K_{b_k}(u_k, Y_{in}) \big) \right|$$
$$\le C_{13}\, b(n)^{-2} \max_{k=1,\ldots,n^2} \sup_{(y, b) \in U_k} \big( |y - u_k| + |b - b_k| \big) + \max_{k=1,\ldots,n^2} \sup_{(y, b) \in U_k} |b^{-1} - b_k^{-1}|\, \frac{1}{n} \sum_{i=1}^{n} |K_{b_k}(u_k, Y_{in})| = O\big(n^{-1} b(n)^{-2}\big).$$
This also holds true if $Y_{in}$ is replaced by $\tilde Y_i$. Therefore the proof is complete. □

Proof of Theorem 3.1. (i) Case $\mu \notin D$: Obviously,
$$|\hat f_n(x) - f(x)| \le \det(\hat\Sigma_n)^{-1/2} \big( |\hat g_n(U_n(x)) - g(U_n(x))| + |g(U_n(x)) - g(u(x))| \big) + |g(u(x))|\, \big| \det(\hat\Sigma_n)^{-1/2} - \det(\Sigma)^{-1/2} \big|$$
for $x \in D$, where $U_n(x) := (x - \hat\mu_n)^T \hat\Sigma_n^{-1}(x - \hat\mu_n)$ and $u(x) = (x - \mu)^T \Sigma^{-1}(x - \mu)$. Choose $\eta > 0$ such that there are $M_3, M_4 > 0$ with
$$[M_3, M_4] \supset \big\{ (x - \bar\mu)^T \bar\Sigma^{-1}(x - \bar\mu) : x \in D,\ \|\bar\mu - \mu\| \le \eta,\ \|\bar\Sigma - \Sigma\| \le \eta \big\}.$$
Now choose $m, M > 0$ such that $\psi([M_3, M_4]) \subset [m, M]$. By (2.3), $\|\hat\mu_n - \mu\| \le \eta$ and $\|\hat\Sigma_n - \Sigma\| \le \eta$ for $n \ge n_1(\omega)$. Then we obtain
$$\sup_{x \in D} |\hat f_n(x) - f(x)| \le \det(\hat\Sigma_n)^{-1/2} \sup_{y \in [M_3, M_4]} |\hat g_n(y) - g(y)| + \det(\hat\Sigma_n)^{-1/2} \sup_{y \in [M_3, M_4]} |g'(y)| \sup_{x \in D} |U_n(x) - u(x)|$$
$$+ \sup_{y \in [M_3, M_4]} |g(y)|\, \big| \det(\hat\Sigma_n)^{-1/2} - \det(\Sigma)^{-1/2} \big| \tag{4.7}$$
for $n \ge n_1(\omega)$. An application of Lemmas 4.3–4.5, 4.11 and (4.1) leads to
$$\sup_{y \in [M_3, M_4]} |\hat g_n(y) - g(y)| \le \text{const} \cdot \sup_{x \in [m, M]} \sup_{b \in [\underline b_n, \bar b_n]} |\hat h_n(x) - h(x)| = O\big(\sqrt{\ln n}\,(n b(n))^{-1/2} + b^p(n)\big) \quad \text{a.s.} \tag{4.8}$$
Using (2.3) we obtain
$$\sup_{x \in D} |U_n(x) - u(x)| = O\big(n^{-1/2} \sqrt{\ln\ln(n)}\big) \quad \text{a.s.}, \tag{4.9}$$
$$\big| \det(\hat\Sigma_n)^{-1/2} - \det(\Sigma)^{-1/2} \big| = O\big(n^{-1/2} \sqrt{\ln\ln(n)}\big) \quad \text{a.s.} \tag{4.10}$$
In case (i), Theorem 3.1 now follows from (4.7)–(4.10).


(ii) Case $\mu \in D$: Here the proof is similar to that of case (i). The main differences are that here $M_3 = 0$ and
$$\sup_{y \in [0, M_4]} |\hat g_n(y) - g(y)| = O\big(\sqrt{\ln n}\,(n b(n))^{-1/2} + b^{\gamma}(n)\big) \quad \text{a.s.} \qquad \square$$

Now we proceed with proving asymptotic normality. Suppose that Conditions K(1) and T(2) are satisfied. By (4.10),
$$\sqrt{n B(n)}\, \big(\hat f_n(x) - f(x)\big) = \sqrt{n B(n)}\, \Big( \det(\hat\Sigma_n)^{-1/2} \big(\hat g_n(U_n(x)) - g(u)\big) + \big( \det(\hat\Sigma_n)^{-1/2} - \det(\Sigma)^{-1/2} \big) g(u) \Big)$$
$$= \sqrt{n B(n)}\, \det(\hat\Sigma_n)^{-1/2} \big(\hat g_n(U_n(x)) - g(u)\big) + o(1) \quad \text{a.s.} \tag{4.11}$$
($U_n = U_n(x)$ and $u = u(x)$ as in the previous proof). Now we have to consider the convergence of $\hat g_n(U_n(x)) - g(u)$ and get
$$\sqrt{n b(n)}\, \big(\hat g_n(U_n) - g(u)\big) = \sqrt{n b(n)}\; s_d^{-1} U_n^{-d/2+1} \psi'(U_n)\, \big(\hat h_n(\psi(U_n), B(n)) - h(\psi(u))\big) + A_n,$$
where
$$A_n = \sqrt{n b(n)}\; s_d^{-1} \big( U_n^{-d/2+1} \psi'(U_n) - u^{-d/2+1} \psi'(u) \big)\, h(\psi(u)).$$
Since the derivative of the function $t \mapsto t^{-d/2+1} \psi'(t)$ is bounded on finite intervals $[t_1, t_2]$, $t_1 > 0$, Eq. (4.9) implies
$$|A_n| \le O\big(\sqrt{n b(n)}\big)\, |U_n - u| = o(1) \quad \text{a.s.}$$
Hence
$$\sqrt{n b(n)}\, \big(\hat g_n(U_n) - g(u)\big) = \sqrt{n b(n)}\, \big( s_d^{-1} u^{-d/2+1} \psi'(u) + o_P(1) \big)\, \big(\hat h_n(\psi(U_n), B(n)) - h(\psi(u))\big) + o_P(1). \tag{4.12}$$
We have $U_n(x) \in [u/2, 2u]$ for $n \ge n_2(\omega)$. Let $m, M > 0$ be such that $\psi([u/2, 2u]) \subset [m, M]$. By Lemmas 4.5 and 4.11,
$$\sup_{y \in [m, M]} \sup_{b \in [\underline b_n, \bar b_n]} |\hat h_n(y, b) - \tilde h_n(y, b)| = o\big((n b(n))^{-1/2}\big) \quad \text{a.s.}$$
and
$$\hat h_n(\psi(U_n), B(n)) = \tilde h_n(\psi(U_n), B(n)) + o_P\big((n b(n))^{-1/2}\big) \tag{4.13}$$
($n \ge n_2(\omega)$). The next step is to prove asymptotic normality of $\tilde h_n(\psi(U_n), B(n)) - h(\psi(u))$. The following lemma is a classical result by Parzen [12], since $\tilde h_n(\cdot, b(n))$ is a kernel estimator for the density $h$ of $\tilde Y_i$.


Lemma 4.12. Suppose that for some integer $p \ge 2$, $h^{(p)}$ exists on $(0, +\infty)$ and is continuous at $y > 0$. Assume that Condition K(p) is fulfilled and $b(n) = C_{14} n^{-1/(2p+1)}$ with a constant $C_{14} > 0$. Then
$$\sqrt{n b(n)}\, \big(\tilde h_n(y, b(n)) - h(y)\big) \xrightarrow{D} N(\mu_1, \sigma_1^2),$$
$$\mu_1 = C_{14}^{(2p+1)/2}\, \frac{1}{p!}\, h^{(p)}(y) \int_{-1}^{1} t^p K(t)\, dt, \qquad \sigma_1^2 = h(y) \int_{-1}^{1} K^2(t)\, dt.$$

Lemma 4.13. Assume that $h$ is Lipschitz continuous in some neighbourhood of $\psi(u)$. Then
$$|\tilde h_n(\psi(U_n), b(n)) - \tilde h_n(\psi(u), b(n))| = o_P\big((n b(n))^{-1/2}\big).$$

Proof. Let $n_3$ be such that $2 b(n) \le \psi(u)$ for $n \ge n_3$. Using the Lipschitz continuity of $K'$,
$$|\tilde h_n(\psi(U_n), b(n)) - \tilde h_n(\psi(u), b(n))| \le n^{-1} b(n)^{-2} \left| \sum_{i=1}^{n} K'\big((\psi(u) - \tilde Y_i)\, b(n)^{-1}\big) \right| |\psi(U_n) - \psi(u)|$$
$$+ O\big(n^{-1} b(n)^{-3}\big) \sum_{i=1}^{n} I\big(|\psi(u) - \tilde Y_i| \le 2 b(n)\big)\, \big(\psi(U_n) - \psi(u)\big)^2 + O\big(b(n)^{-1}\big)\, I\big(|\psi(U_n) - \psi(u)| > b(n)\big)$$
$$= A_{n1} + A_{n2} + A_{n3}, \quad \text{say} \tag{4.14}$$
($n \ge n_3$). By (4.9),
$$|\psi(U_n) - \psi(u)| = O\big((\ln\ln n / n)^{1/2}\big) \quad \text{a.s.} \tag{4.15}$$
Analogously to Lemma 4.6, one proves
$$\left| \sum_{i=1}^{n} \Big( K'\big((\psi(u) - \tilde Y_i)\, b(n)^{-1}\big) - E K'\big((\psi(u) - \tilde Y_i)\, b(n)^{-1}\big) \Big) \right| = O\big(\sqrt{n b(n) \ln(n)}\big) \quad \text{a.s.}$$
Moreover,
$$E K'\big((\psi(u) - \tilde Y_i)\, b(n)^{-1}\big) = b(n) \int_{-1}^{1} K'(t)\, \big( h(\psi(u) - t b(n)) - h(\psi(u)) \big)\, dt = O(b^2(n)).$$


Hence
$$A_{n1} = O\Big( n^{-3/2}\, b(n)^{-2}\, \sqrt{\ln\ln n}\, \big( n b^2(n) + \sqrt{n b(n) \ln(n)} \big) \Big) = o\big(n^{-1/2} b(n)^{-1/2}\big) \quad \text{a.s.} \tag{4.16}$$
The consistency of density estimators (cf. [12]) and (4.15) lead to
$$A_{n2} = O\big(n^{-2}\, b(n)^{-3}\, \ln\ln n\big) \sum_{i=1}^{n} I\big(|\psi(u) - \tilde Y_i| \le 2 b(n)\big) = o_P\big(n^{-1/2} b(n)^{-1/2}\big). \tag{4.17}$$
Further, by (4.15), we obtain
$$A_{n3} \le O\big(b(n)^{-5}\big)\, \big(\psi(U_n) - \psi(u)\big)^4 = o\big(n^{-1/2}\big) \quad \text{a.s.} \tag{4.18}$$
Therefore, the assertion of Lemma 4.13 follows from (4.14) and (4.16)–(4.18). □

Lemma 4.14. Assume that for some integer $p \ge 2$, $h^{(p)}$ exists on $(0, +\infty)$ and is continuous at $\psi(u)$. Let Condition K(p) and (3.6) be fulfilled and $b(n) = C_{14} n^{-1/(2p+1)}$ with a constant $C_{14} > 0$. Then
$$\sqrt{n b(n)}\, \big(\tilde h_n(\psi(U_n), B(n)) - h(\psi(u))\big) \xrightarrow{D} N(\mu_2, \sigma_2^2),$$
$$\mu_2 = C_{14}^{(2p+1)/2}\, \frac{1}{p!}\, h^{(p)}(\psi(u)) \int_{-1}^{1} t^p K(t)\, dt, \qquad \sigma_2^2 = h(\psi(u)) \int_{-1}^{1} K^2(t)\, dt.$$

Proof. Observe that
$$\tilde h_n(\psi(U_n(x)), B(n)) - h(\psi(u)) = \frac{b(n)}{B(n)} \big( \breve h_n(\psi(U_n(x)), B(n)) - h(\psi(u)) \big) + h(\psi(u)) \left( 1 - \frac{b(n)}{B(n)} \right),$$
where
$$\breve h_n(y, b) = n^{-1} b(n)^{-1} \sum_{i=1}^{n} K_b(y, \tilde Y_i).$$
Further,
$$|K(v/b_1) - K(v/b_2)| \le C_{15} \left| 1 - \frac{b_1}{b_2} \right| 1_{[-b_1, b_1]}(v) \quad \text{for } v \in \mathbb{R} \tag{4.19}$$


with a constant $C_{15} > 0$, provided that $b_1 \ge b_2 > 0$. Let $n_4(\omega)$ be such that $B(n) \le 2 b(n)$ for $n \ge n_4(\omega)$. Hence, by the consistency of density estimators,
$$\sqrt{n b(n)}\, \big| \breve h_n(\psi(U_n), B(n)) - \tilde h_n(\psi(U_n), b(n)) \big|$$
$$\le 2 \sqrt{n}\, |B(n) b(n)^{-1} - 1|\, \max\{1, b(n) B(n)^{-1}\}\; n^{-1} b(n)^{-1/2} \sum_{i=1}^{n} \Big( I\big(|\tilde Y_i - \psi(U_n)| \le 2 b(n)\big) + I\big(|\tilde Y_i + \psi(U_n)| \le 2 b(n)\big) \Big)$$
$$\le o_P\big(\sqrt{\ln\ln n}\; b(n)^{-1/2}\big) \left( n^{-1} \sum_{i=1}^{n} \Big( I\big(|\tilde Y_i - \psi(u)| \le 3 b(n)\big) + I\big(|\tilde Y_i + \psi(u)| \le 3 b(n)\big) \Big) + I\big(|\psi(u) - \psi(U_n)| > b(n)\big) \right)$$
$$= o_P\big(\sqrt{\ln\ln n\; b(n)}\big) = o_P(1) \tag{4.20}$$
for $n \ge n_4(\omega)$, in view of (4.18). Combining Lemma 4.13, (4.19) and (4.20),
$$\sqrt{n b(n)}\, \big(\tilde h_n(\psi(U_n), B(n)) - h(\psi(u))\big) = \frac{b(n)}{B(n)}\, \sqrt{n b(n)}\, \big(\tilde h_n(\psi(u), b(n)) - h(\psi(u))\big) + o_P(1).$$
Now apply Lemma 4.12 to get Lemma 4.14. □

Proof of Theorem 3.2. In view of (4.12), (4.13) and Lemma 4.14, we have
$$\sqrt{n b(n)}\, \big(\hat g_n(U_n(x)) - g(u)\big) \xrightarrow{D} N(\mu_3, \sigma_3^2),$$
where
$$\mu_3 = s_d^{-1} u^{-d/2+1} \psi'(u)\, \mu_2, \qquad \sigma_3^2 = s_d^{-2} \big( u^{-d/2+1} \psi'(u) \big)^2 \sigma_2^2.$$
By virtue of (4.11), the proof of Theorem 3.2 is complete. □

Acknowledgments

The author is grateful to Dr. Hans Crauel for helpful hints.

References

[1] T.W. Anderson, Nonnormal multivariate distributions: inference based on elliptically contoured distributions, in: C.R. Rao (Ed.), Multivariate Analysis: Future Directions, North-Holland Series in Statistics and Probability, Vol. 5, North-Holland, Amsterdam, 1993, pp. 1–24.
[2] P.J. Bickel, C.A.J. Klaassen, Y. Ritov, J.A. Wellner, Efficient and Adaptive Estimation for Semiparametric Models, Springer, Berlin, 1998.
[3] M. Bilodeau, D. Brenner, Theory of Multivariate Statistics, Springer, New York, 1999.
[4] A. Cowling, P. Hall, On pseudodata methods for removing boundary effects in kernel density estimation, J. R. Statist. Soc. Ser. B 58 (1996) 551–563.
[5] H. Cui, X. He, The consistence of semiparametric estimation of elliptic densities, Acta Math. Sin. New Ser. 11 (special issue) (1995) 44–58.
[6] H. El Barmi, J.S. Simonoff, Transformation-based density estimation for weighted distributions, J. Nonparametric Stat. 12 (2000) 861–878.
[7] K.-T. Fang, S. Kotz, K. Ng, Symmetric Multivariate and Related Distributions, Chapman & Hall, London, 1990.
[8] K.-T. Fang, Y. Zhang, Generalized Multivariate Analysis, Springer, Berlin, 1990.
[9] M.C. Jones, J.S. Marron, S.J. Sheather, Progress in data-based bandwidth selection for kernel density estimation, Comput. Statist. 11 (1996) 337–381.
[10] E. Liebscher, On a class of plug-in methods of bandwidth selection for kernel density estimators, Statist. Decisions 16 (1998) 229–243.
[11] R. Maronna, Robust M-estimators of multivariate location and scatter, Ann. Statist. 4 (1976) 51–67.
[12] E. Parzen, On estimation of a probability density function and mode, Ann. Math. Statist. 33 (1962) 1065–1076.
[13] D. Pollard, Convergence of Stochastic Processes, Springer, Berlin, 1984.
[14] D.W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization, Wiley, New York, 1992.
[15] B.W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman & Hall, London, 1986.
[16] W. Stute, A law of the logarithm for kernel density estimators, Ann. Probab. 10 (1982) 414–422.
[17] W. Stute, U. Werner, Nonparametric estimation of elliptically contoured densities, in: G. Roussas (Ed.), Nonparametric Functional Estimation and Related Topics, NATO ASI Ser., Ser. C 335, Kluwer Academic, Dordrecht, 1991, pp. 173–190.
[18] S. Zhang, R.J. Karunamuni, M.C. Jones, An improved estimator of the density function at the boundary, J. Am. Stat. Assoc. 94 (1999) 1231–1241.
