Multivariate Archimax copulas


Journal of Multivariate Analysis 126 (2014) 118–136


A. Charpentier (a), A.-L. Fougères (b), C. Genest (c,∗), J.G. Nešlehová (c)

(a) Département de mathématiques, Université du Québec à Montréal, C.P. 8888, Succursale Centre-ville, Montréal (Québec), Canada H3C 3P8
(b) Université de Lyon, CNRS UMR 5208, Université Lyon 1, Institut Camille-Jordan, 43, boul. du 11 novembre 1918, F-69622 Villeurbanne Cedex, France
(c) Department of Mathematics and Statistics, McGill University, 805, rue Sherbrooke ouest, Montréal (Québec), Canada H3A 0B9

Article history: Received 10 August 2013; available online 27 January 2014.
AMS subject classifications: 60E10, 60G70, 62H05.

Abstract: A multivariate extension of the bivariate class of Archimax copulas was recently proposed by Mesiar and Jágr (2013), who asked under which conditions it is valid. This paper answers their question and provides a stochastic representation of multivariate Archimax copulas. A few basic properties of these copulas are explored, including their minimum and maximum domains of attraction. Several non-trivial examples of multivariate Archimax copulas are also provided. © 2014 Elsevier Inc. All rights reserved.

Keywords: Archimedean copula; Domain of attraction; Multivariate extreme-value distribution; Stable tail dependence function; Williamson d-transform

1. Introduction

A d-variate copula is the joint cumulative distribution function of a vector (U1, …, Ud) of random variables each having a uniform distribution on the interval (0, 1). Following Capéraà et al. [5], a bivariate copula is said to be Archimax if it can be written, for all u1, u2 ∈ (0, 1), in the form



Cψ,A(u1, u2) = ψ[{ψ⁻¹(u1) + ψ⁻¹(u2)} A(ψ⁻¹(u1)/{ψ⁻¹(u1) + ψ⁻¹(u2)})],   (1)

using maps A : [0, 1] → [1/2, 1] and ψ : [0, ∞) → [0, 1] such that

(i) A is convex and, for all t ∈ [0, 1], max(t, 1 − t) ≤ A(t) ≤ 1;
(ii) ψ is convex, decreasing and such that ψ(0) = 1 and lim_{x→∞} ψ(x) = 0, with the convention that ψ⁻¹(0) = inf{x ≥ 0 : ψ(x) = 0}.

The term Archimax was chosen by Capéraà et al. [5] to reflect the fact that if A ≡ 1, Cψ,A reduces to an Archimedean copula, viz.

Cψ(u1, u2) = ψ{ψ⁻¹(u1) + ψ⁻¹(u2)},
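To make definition (1) concrete, here is a minimal numerical sketch (an illustration, not part of the paper) using the Clayton generator ψ(x) = (1 + x)^{−1/θ} and the Gumbel (logistic) Pickands function A(t) = {t^r + (1 − t)^r}^{1/r} with r ≥ 1; both satisfy conditions (i)–(ii), and r = 1 gives A ≡ 1, so Cψ,A collapses to the Archimedean copula:

```python
def psi_clayton(x, theta=2.0):
    # Clayton generator: convex, decreasing, psi(0) = 1, psi(inf) = 0
    return (1.0 + x) ** (-1.0 / theta)

def psi_clayton_inv(u, theta=2.0):
    return u ** (-theta) - 1.0

def A_gumbel(t, r=1.5):
    # Gumbel Pickands function: convex, max(t, 1 - t) <= A(t) <= 1, A == 1 when r = 1
    return (t ** r + (1.0 - t) ** r) ** (1.0 / r)

def archimax2(u1, u2, A):
    # bivariate Archimax copula of Eq. (1)
    s = psi_clayton_inv(u1) + psi_clayton_inv(u2)
    return psi_clayton(s * A(psi_clayton_inv(u1) / s))

# r = 1 gives A == 1, so the Archimax copula collapses to the Archimedean one
c_arch = psi_clayton(psi_clayton_inv(0.3) + psi_clayton_inv(0.7))
assert abs(archimax2(0.3, 0.7, lambda t: A_gumbel(t, r=1.0)) - c_arch) < 1e-12

# since A <= 1 and psi is decreasing, any Archimax value lies between the
# Archimedean value and the upper Fréchet-Hoeffding bound min(u1, u2)
c = archimax2(0.3, 0.7, A_gumbel)
assert c_arch <= c <= 0.3
```

The parameter values θ = 2 and r = 1.5 are arbitrary choices for illustration.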



Corresponding author. E-mail address: [email protected] (C. Genest).

0047-259X/$ – see front matter © 2014 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.jmva.2013.12.013


while if ψ(t) = e^{−t} for all t ∈ [0, ∞), Cψ,A is an extreme-value copula, viz.

CA(u1, u2) = exp[ln(u1u2) A{ln(u1)/ln(u1u2)}].

In [5], Archimax copulas are presented as a tool for constructing bivariate distribution functions in the maximum domain of attraction of an extreme-value copula CA⋆ where, for all t ∈ (0, 1),

A⋆(t) = {t^{1/α} + (1 − t)^{1/α}}^α A^α(t^{1/α}/{t^{1/α} + (1 − t)^{1/α}})   (2)

when the map t ↦ ψ⁻¹(1 − 1/t) is regularly varying at infinity of degree −1/α with α ∈ (0, 1]; see, e.g., p. 13 in [25] for a definition of regular variation. Bivariate Archimax copulas have been further studied and found to be useful in various contexts since their introduction; see, e.g., [1] for applications in hydrology and www.math.sk/wiki/bacigal for a library of R programs.

Recently, Bacigál and Mesiar [2] and Mesiar and Jágr [21] proposed an extension of the family (1) to arbitrary dimension d ≥ 3. Their generalization involves the notion of stable tail dependence function originally due to Huang [12]. A function ℓ : [0, ∞)^d → [0, ∞) is called a d-variate stable tail dependence function if there exists a d-variate extreme-value copula D such that, for all x1, …, xd ∈ [0, ∞),

ℓ(x1, …, xd) = − ln{D(e^{−x1}, …, e^{−xd})}.

Let ψ : [0, ∞) → [0, 1] be the generator of a d-variate Archimedean copula Cψ defined, for all u1, …, ud ∈ (0, 1), by

Cψ(u1, …, ud) = ψ{ψ⁻¹(u1) + ⋯ + ψ⁻¹(ud)}.

As shown by McNeil and Nešlehová [19], this occurs if and only if the map ψ : [0, ∞) → [0, 1] satisfies ψ(0) = 1, lim_{x→∞} ψ(x) = 0 and is d-monotone. The latter property means that ψ has d − 2 derivatives on (0, ∞) and, for all j ∈ {0, …, d − 2}, (−1)^j ψ^{(j)} ≥ 0, with (−1)^{d−2} ψ^{(d−2)} being non-increasing and convex on (0, ∞). Mesiar and Jágr [21] suggest that a suitable d-variate extension of the notion of Archimax copula would be obtained by setting, for all u1, …, ud ∈ (0, 1),

Cψ,ℓ(u1, …, ud) = ψ ∘ ℓ{ψ⁻¹(u1), …, ψ⁻¹(ud)}.   (3)

This is indeed reasonable: when d = 2, one recovers Eq. (1) by setting A(t) = ℓ(t, 1 − t) for all t ∈ [0, 1]. While expression (3) appears as formula (18) in [21], these authors merely conjecture that Cψ,ℓ is a copula for any choice of ψ and ℓ; this is their Open Problem 4.1.

The purpose of this paper is to solve this problem by showing that Cψ,ℓ as defined above is indeed a copula for any combination of d-variate Archimedean generator ψ and d-variate stable tail dependence function ℓ. This result is established in Section 2 by combining a composition theorem of Morillas [22], a characterization of d-variate Archimedean generators popularized by McNeil and Nešlehová [19], and a recent characterization of stable tail dependence functions due to Ressel [26]. Two different stochastic representations of multivariate Archimax copulas are then provided in Section 3; they shed light on the properties of these copulas and facilitate simulation. It is also emphasized there that for some d-variate stable tail dependence functions ℓ, the condition on ψ is not necessary for (3) to be a copula. Algorithms for generating observations from this new class of copulas are presented in Section 4. These results are illustrated in Section 5 using new and existing examples of Archimax copulas. The maximum and minimum attractors of multivariate Archimax copula families are then derived in Section 6. Finally, a few remaining challenges are outlined in Section 7, where partial results are offered on the level of dependence that can be achieved by multivariate Archimax copulas.

2. Eq. (3) defines bona fide copulas

The purpose of this section is to show the following result, which solves Open Problem 4.1 in [21].

Theorem 2.1. Let ℓ be a d-variate stable tail dependence function and ψ be the generator of a d-variate Archimedean copula. There exists a vector (X1, …, Xd) of strictly positive random variables such that, for all x1, …, xd ∈ [0, ∞), Pr(X1 > x1, …, Xd > xd) = ψ ∘ ℓ(x1, …, xd). In particular, Pr(Xj > xj) = ψ(xj) for all xj ∈ [0, ∞) and j ∈ {1, …, d}.

The proof of this result relies on the following recent characterization of stable tail dependence functions due to Ressel [26]. In what follows, ej denotes a d-dimensional vector whose components are all 0 except the jth, which is equal to 1. Furthermore, 1_A denotes the indicator of the event A.

Theorem 2.2 (Ressel, 2013). A function ℓ : [0, ∞)^d → [0, ∞) is a d-dimensional stable tail dependence function if and only if

(a) ℓ is homogeneous of degree 1, i.e., for all k ∈ (0, ∞) and x1, …, xd ∈ [0, ∞), ℓ(kx1, …, kxd) = k ℓ(x1, …, xd);
(b) ℓ(e1) = ⋯ = ℓ(ed) = 1;
(c) ℓ is fully d-max decreasing, i.e., for all x1, …, xd, h1, …, hd ∈ [0, ∞) and any J ⊆ {1, …, d} of arbitrary size |J| = k,

∑_{ι1,…,ιk ∈ {0,1}} (−1)^{ι1+⋯+ιk} ℓ(x1 + ι1 h1 1_{1∈J}, …, xd + ιd hd 1_{d∈J}) ≤ 0.
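Conditions (a)–(c) can be probed numerically. The sketch below (an illustration, not part of the paper) checks them for the logistic function ℓ(x1, …, xd) = (x1^θ + ⋯ + xd^θ)^{1/θ} with θ ≥ 1, which appears later as Eq. (7), taking J = {1, …, d} in condition (c):

```python
import itertools

def l_logistic(x, theta=1.7):
    # logistic stable tail dependence function (Eq. (7) below), theta >= 1
    return sum(xi ** theta for xi in x) ** (1.0 / theta)

d = 3
x = [0.4, 0.9, 0.2]
h = [0.3, 0.5, 0.1]

# (a) homogeneity of degree 1
assert abs(l_logistic([2.0 * xi for xi in x]) - 2.0 * l_logistic(x)) < 1e-12

# (b) l(e_j) = 1 for every unit vector e_j
for j in range(d):
    e = [0.0] * d
    e[j] = 1.0
    assert abs(l_logistic(e) - 1.0) < 1e-12

# (c) fully d-max decreasing, with J = {1, ..., d}: the alternating sum is <= 0
total = sum((-1) ** sum(iota)
            * l_logistic([x[j] + iota[j] * h[j] for j in range(d)])
            for iota in itertools.product([0, 1], repeat=d))
assert total <= 1e-9
```

The specific values of θ, x and h are arbitrary; a full verification would of course range over all arguments and all subsets J.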


Conditions (a)–(c) imply, but are stronger than, the well-known characteristics of stable tail dependence functions, namely (i) the convexity of ℓ in each of its arguments; and (ii) the fact that, for all x1, …, xd ∈ [0, ∞),

max(x1, …, xd) ≤ ℓ(x1, …, xd) ≤ x1 + ⋯ + xd.   (4)

The properties (i) and (ii) have long been known to be necessary but insufficient; see, e.g., p. 257 in [3]. Note that Eq. (4) implies that ℓ(x1, …, xd) = xj when xk = 0 for all k ≠ j, and that the bounds in (4) are themselves stable tail dependence functions, defined, for all x1, …, xd ∈ [0, ∞), by

ℓM(x1, …, xd) = max(x1, …, xd),    ℓΠ(x1, …, xd) = x1 + ⋯ + xd.

While the extreme-value copula corresponding to ℓΠ characterizes independence, the extreme-value copula with stable tail dependence function ℓM is the upper Fréchet–Hoeffding bound that depicts the dependence of comonotonic variables. The latter is defined, for all u1, …, ud ∈ [0, 1], by M(u1, …, ud) = min(u1, …, ud).

The necessity of condition (c) is easily deduced from the well-known fact [18, Eq. (7.54)] that if ℓ is a stable tail dependence function, then there exists a random vector (Z1, …, Zd) supported on [0, 1]^d such that, for all x1, …, xd ∈ [0, ∞),

ℓ(x1, …, xd) = lim_{t→∞} t Pr( ⋃_{j=1}^{d} {Zj > 1 − xj/t} ).

To show that the sum in condition (c) cannot be positive, it suffices to check that, for all t ∈ (0, ∞),

∑_{ι1,…,ιk ∈ {0,1}} (−1)^{ι1+⋯+ιk} Pr( ⋂_{j∉J} {Zj > 1 − xj/t} ∩ ⋂_{j∈J} {Zj > 1 − xj/t − ιj hj/t} )

is less than or equal to zero. Given that

∑_{ι1,…,ιk ∈ {0,1}} (−1)^{ι1+⋯+ιk} = 0,

this inequality follows directly from the non-negativity of

∑_{ι1,…,ιk ∈ {0,1}} (−1)^{ι1+⋯+ιk} Pr( ⋂_{j∉J} {Zj ≤ 1 − xj/t} ∩ ⋂_{j∈J} {Zj ≤ 1 − xj/t − ιj hj/t} )
  = Pr( ⋂_{j∉J} {Zj ≤ 1 − xj/t} ∩ ⋂_{j∈J} {Zj ∈ (1 − (xj + hj)/t, 1 − xj/t]} ).

Condition (c) is readily seen to be equivalent to the property that the function f : (−∞, 0]^d → (−∞, 0] defined, for all y1, …, yd ∈ (−∞, 0], by

f(y1, …, yd) = −ℓ(−y1, …, −yd)   (5)

is totally increasing in the sense of Morillas [22]. Her condition states that for all y1, …, yd ∈ (−∞, 0] and J ⊆ {1, …, d} of arbitrary size |J| = k,

∑_{ι1,…,ιk ∈ {0,1}} (−1)^{k−ι1−⋯−ιk} f(y1 + ι1 h1 1_{1∈J}, …, yd + ιd hd 1_{d∈J}) ≥ 0,   (6)

where for all j ∈ J, hj is a positive constant such that yj + hj ≤ 0. Setting xj = −yj − hj 1_{j∈J}, one can see that the left-hand side of (6) is

∑_{ι1,…,ιk ∈ {0,1}} (−1)^{k−ι1−⋯−ιk} ℓ(x1 + (1 − ι1)h1 1_{1∈J}, …, xd + (1 − ιd)hd 1_{d∈J}).

This expression is non-negative if and only if (c) holds. The scene is now set for the proof of Theorem 2.1.

Proof of Theorem 2.1. Suppose that ℓ is a d-dimensional stable tail dependence function and let ψ be the generator of a d-variate Archimedean copula. As shown by Morillas [22] and McNeil and Nešlehová [19], the latter holds if and only if the function ψ† : (−∞, 0] → [0, 1] defined, for all x ∈ (−∞, 0], by ψ†(x) = ψ(−x) is absolutely monotone of order d. Theorem 2.3 in [22] thus implies that if f is given by (5), then ψ† ∘ f is totally increasing. Now because ℓ satisfies condition (b) in Theorem 2.2, ψ† ∘ f has margins ψ†. Furthermore, given that ψ is continuous, it follows from Corollary 1 of Ressel [26] that ψ† ∘ f is itself continuous. Hence ψ† ∘ f is a distribution function on (−∞, 0]^d. This is equivalent to saying that the function defined, for all x1, …, xd ∈ [0, ∞), by

ψ† ∘ f(−x1, …, −xd) = ψ ∘ ℓ(x1, …, xd)

is a survival function on [0, ∞)^d. Finally, for all j ∈ {1, …, d} and x ∈ [0, ∞), one has Pr(Xj > x) = ψ ∘ ℓ(x ej) = ψ(x). This completes the argument. □


In view of Sklar’s Theorem for survival functions [19, Theorem 2.1] and Theorem 2.1, the survival copula of (X1, …, Xd) is Cψ,ℓ, i.e., for all x1, …, xd ∈ [0, ∞), Pr(X1 > x1, …, Xd > xd) = Cψ,ℓ{Pr(X1 > x1), …, Pr(Xd > xd)}. This result, which solves Open Problem 4.1 in [21], is formally stated below.

Corollary 2.3. Let ℓ be a d-variate stable tail dependence function and ψ be the generator of a d-variate Archimedean copula. Then the function Cψ,ℓ defined by Eq. (3) is a copula.

Before exploring the stochastic structure and properties of Archimax copulas, it may be instructive to look at a few simple examples.

Example 2.4. Let D be the independence copula. Then ℓ = ℓΠ and hence, for all u1, …, ud ∈ (0, 1),

Cψ,ℓ(u1, …, ud) = ψ{ψ⁻¹(u1) + ⋯ + ψ⁻¹(ud)}

is a d-variate Archimedean copula. Similarly, if ψ(x) = e^{−x} for all x > 0, ψ is then the Archimedean generator of the d-variate independence copula and

Cψ,ℓ(u1, …, ud) = exp[−ℓ{− ln(u1), …, − ln(ud)}] = D(u1, …, ud),

i.e., Cψ,ℓ reduces to the extreme-value copula D corresponding to ℓ. This justifies the name "Archimax" copula, exactly as in the bivariate case [5].

Example 2.5. For arbitrary θ ≥ 1, the stable tail dependence function of the d-variate Gumbel extreme-value copula is defined, for all x1, …, xd > 0, by

ℓθ(x1, …, xd) = (x1^θ + ⋯ + xd^θ)^{1/θ}.   (7)

This is also called the (symmetric) logistic model; see, e.g., Section 9.2.2 in [3]. Given an arbitrary generator ψ, this choice of stable tail dependence function leads to the copula defined, for all u1, …, ud ∈ (0, 1), by

Cψ,θ(u1, …, ud) = ψ[({ψ⁻¹(u1)}^θ + ⋯ + {ψ⁻¹(ud)}^θ)^{1/θ}],   (8)

i.e., an Archimedean copula with generator defined, for all x > 0, by ψθ(x) = ψ(x^{1/θ}). Clearly, Cψ,1 is the d-variate Archimedean copula with generator ψ. Furthermore, ℓθ → ℓM pointwise as θ → ∞. Note that if ψ is the Laplace transform of some random variable V, it follows from Theorem 3.6 of Hofert [10] that ψθ is the Laplace transform of SV^θ, where S is a stable random variable, independent of V.

3. Archimax copula representations

The previous developments suggest interesting and useful stochastic representations for Archimax copulas, which are described next. First, the case where ψ is a Laplace transform is examined in Section 3.1; the more general case where ψ is merely d-monotone is then handled in Section 3.2.

3.1. ψ is a Laplace transform

Suppose that ψ is the Laplace transform of a strictly positive random variable Θ with distribution function G. In other words, for all x ∈ [0, ∞), one has

ψ(x) = ∫_0^∞ e^{−xθ} dG(θ).

In view of Bernstein’s Theorem [29, Section 12, p. 160], ψ is completely monotone, i.e., it is differentiable of any order and for all k ∈ ℕ, (−1)^k ψ^{(k)} ≥ 0. Let ℓ be a d-variate stable tail dependence function and consider a random vector (T1, …, Td) whose survival function is given, for all t1, …, td ∈ (0, ∞), by

Pr(T1 > t1, …, Td > td) = exp{−ℓ(t1, …, td)} = D(e^{−t1}, …, e^{−td}).   (9)

In other words, T1 , . . . , Td are unit exponential random variables whose survival copula is an extreme-value copula D with stable tail dependence function ℓ.
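A vector (T1, …, Td) with survival function (9) is easy to construct for some specific models. The sketch below (an illustration under an assumed model, not the paper’s general recipe) uses the symmetric Marshall–Olkin, or Cuadras–Augé, stable tail dependence function ℓ(x1, x2) = x1 + x2 − α min(x1, x2), for which the classical shared-shock construction yields exactly this survival function:

```python
import math
import random

random.seed(3)
alpha = 0.5  # Cuadras-Auge dependence parameter (illustrative choice)

def l_ca(x1, x2):
    # symmetric Marshall-Olkin stable tail dependence function
    return x1 + x2 - alpha * min(x1, x2)

def draw_T():
    # shared-shock construction: T_j = min(E_j / (1 - alpha), E_shared / alpha),
    # with E_1, E_2, E_shared independent unit exponentials; the joint survival
    # function is exp(-(1-a)t1 - (1-a)t2 - a*max(t1,t2)) = exp(-l_ca(t1,t2))
    e1 = random.expovariate(1.0)
    e2 = random.expovariate(1.0)
    e12 = random.expovariate(1.0)
    return min(e1 / (1.0 - alpha), e12 / alpha), min(e2 / (1.0 - alpha), e12 / alpha)

n = 200000
t1, t2 = 0.5, 0.7
hits, total1 = 0, 0.0
for _ in range(n):
    T1, T2 = draw_T()
    hits += (T1 > t1) and (T2 > t2)
    total1 += T1

# joint survival matches exp(-l(t1, t2)), and the margins are unit exponential
assert abs(hits / n - math.exp(-l_ca(t1, t2))) < 0.01
assert abs(total1 / n - 1.0) < 0.02
```

For general ℓ, as the discussion of Algorithm 4.1 below makes clear, simulating from (9) is considerably harder.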


Proposition 3.1. The copula Cψ,ℓ is Archimax with d-variate stable tail dependence function ℓ and completely monotone Archimedean generator ψ if and only if it is the survival copula of the random vector

(X1, …, Xd) = (T1/Θ, …, Td/Θ),   (10)

where Θ has Laplace transform ψ and is stochastically independent of the random vector (T1, …, Td) defined in (9).

Proof. Upon conditioning and invoking the independence between (T1, …, Td) and Θ, one finds, for all x1, …, xd ∈ [0, ∞),

Pr(X1 > x1, …, Xd > xd) = ∫_0^∞ Pr(T1 > θx1, …, Td > θxd) dG(θ) = ∫_0^∞ exp{−θ ℓ(x1, …, xd)} dG(θ) = ψ ∘ ℓ(x1, …, xd).

In other words, the survival function of (X1, …, Xd) is precisely ψ ∘ ℓ and the statement follows from Sklar’s Theorem for survival functions. □

Remark 3.2. The construction (10) appears in the work of Li [15], who also points out that if the survival function of the random vector (T1, …, Td) is of the form (9), the joint distribution of the random variables T1* = 1/T1, …, Td* = 1/Td is then given, for all t1, …, td ∈ (0, ∞), by

Pr(T1* ≤ t1, …, Td* ≤ td) = exp{−ℓ(1/t1, …, 1/td)} = D{Φ(t1), …, Φ(td)},

where Φ denotes the cumulative distribution function of the unit Fréchet extreme-value distribution. If Θ has distribution G and Laplace transform ψ, one can then proceed exactly as in the proof of Proposition 3.1 to check that Cψ,ℓ is the copula of the random vector

(X1, …, Xd) = (ΘT1*, …, ΘTd*).

In other words, scale mixtures of multivariate extreme-value distributions with unit Fréchet margins have Archimax dependence structures. Furthermore, one has, for all x1, …, xd ∈ ℝ,

Pr(X1 ≤ x1, …, Xd ≤ xd) = ∫_0^∞ D{Φ^θ(x1), …, Φ^θ(xd)} dG(θ).

The latter construction can be traced back to the work of Marshall and Olkin [17]; see also [13].

3.2. ψ is d-monotone

To encompass the more general case where ψ is only d-monotone, consider the map defined, for all x ∈ [0, ∞), by ψ0(x) = {max(0, 1 − x)}^{d−1}. Given that ψ0 is d-monotone [19, Example 2.2], it follows from Theorem 2.1 that there exists a vector (S1, …, Sd) of strictly positive random variables such that, for all s1, …, sd ∈ [0, ∞),

Pr(S1 > s1, …, Sd > sd) = Ḡℓ(s1, …, sd) = [max{0, 1 − ℓ(s1, …, sd)}]^{d−1}.   (11)

It is easily seen that the support of this joint survival function is included in

Ωd(ℓ) = {(s1, …, sd) ∈ [0, 1]^d : ℓ(s1, …, sd) ≤ 1}

and that S1, …, Sd are (dependent) Beta random variables with parameters 1 and d − 1, denoted B(1, d − 1). Now introduce a strictly positive random variable R with distribution function F which is independent of (S1, …, Sd) and consider the random vector

(X1, …, Xd) = (RS1, …, RSd).   (12)

The following result gives a stochastic representation of Archimax copulas.

Theorem 3.3. (i) If (X1, …, Xd) is a random vector of the form (12), then its survival copula is the Archimax copula Cψ,ℓ, where ψ is the Williamson d-transform of R, i.e., for all x ∈ [0, ∞),

ψ(x) = ∫_x^∞ (1 − x/r)^{d−1} dF(r).

(ii) Let ℓ be a d-variate stable tail dependence function and ψ be a generator of a d-dimensional Archimedean copula. Then Cψ,ℓ is the survival copula of a random vector (X1 , . . . , Xd ) of the form (12), where the distribution function F of R is the inverse


Williamson d-transform of ψ, i.e., for all r ∈ (0, ∞),

F(r) = 1 − ∑_{k=0}^{d−2} (−1)^k r^k ψ^{(k)}(r)/k! − (−1)^{d−1} r^{d−1} ψ₊^{(d−1)}(r)/(d − 1)!,

where ψ₊^{(d−1)} denotes the right-hand derivative of ψ^{(d−2)}.
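The two transforms in Theorem 3.3 can be checked against each other numerically. The sketch below (illustrative, with d = 3 and the assumed completely monotone generator ψ(x) = (1 + x)^{−2}) computes F by the inverse Williamson d-transform and then recovers ψ by discretizing the forward transform:

```python
def psi(x):
    # completely monotone generator psi(x) = (1 + x)^(-2), hence 3-monotone
    return (1.0 + x) ** -2.0

def dpsi(x):
    return -2.0 * (1.0 + x) ** -3.0

def d2psi(x):
    return 6.0 * (1.0 + x) ** -4.0

d = 3

def F(r):
    # inverse Williamson 3-transform of psi, as in Theorem 3.3 (ii)
    return 1.0 - (psi(r) - r * dpsi(r)) - r ** 2 * d2psi(r) / 2.0

# F should be a distribution function: F(0) = 0, non-decreasing, F(r) -> 1
grid = [0.01 * k for k in range(1, 100001)]   # 0.01, 0.02, ..., 1000.0
cdf = [F(r) for r in grid]
assert abs(F(0.0)) < 1e-12
assert all(b >= a - 1e-12 for a, b in zip(cdf, cdf[1:]))
assert cdf[-1] > 0.999

# the Williamson 3-transform of F should recover psi; check at x = 1
x = 1.0
edges, heights = [0.0] + grid, [0.0] + cdf
approx = sum(max(0.0, 1.0 - x / (0.5 * (edges[i] + edges[i + 1]))) ** (d - 1)
             * (heights[i + 1] - heights[i]) for i in range(len(grid)))
assert abs(approx - psi(x)) < 0.01
```

Since ψ here is smooth, the right-hand derivative ψ₊^{(d−1)} coincides with the ordinary one.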

Proof. To establish (i), observe first that ψ is the common marginal survival function of X1, …, Xd. Indeed, fix j ∈ {1, …, d} and set xk = 0 for all k ≠ j. Given that Xj = RSj, one can use the independence between R and Sj to write

Pr(Xj > xj) = ∫_0^∞ Pr(Sj > xj/r) dF(r) = ∫_{xj}^∞ (1 − xj/r)^{d−1} dF(r),

where the fact that the distribution of Sj is B(1, d − 1) was used. The last expression is precisely the Williamson d-transform of R evaluated at xj, and hence for all xj ∈ [0, ∞), Pr(Xj > xj) = ψ(xj). Next, for given x1, …, xd ∈ [0, ∞), write

Pr(X1 > x1, …, Xd > xd) = ∫_0^∞ Pr(S1 > x1/r, …, Sd > xd/r) dF(r) = ∫_{ℓ(x1,…,xd)}^∞ {1 − ℓ(x1/r, …, xd/r)}^{d−1} dF(r).

Using the homogeneity of ℓ, this expression can be rewritten as

Pr(X1 > x1, …, Xd > xd) = ∫_{ℓ(x1,…,xd)}^∞ {1 − ℓ(x1, …, xd)/r}^{d−1} dF(r) = ψ ∘ ℓ(x1, …, xd).

Hence ψ ∘ ℓ is the survival function of the vector (X1, …, Xd). To compute the underlying survival copula, it suffices to set xj = ψ⁻¹(uj) for each j ∈ {1, …, d} because ψ is the survival function of Xj. The converse statement (ii) follows directly from Proposition 3.1 in [19] by retracing the above steps. □

Remark 3.4. The distribution of the random vector (S1, …, Sd) defined in (11) is related to the multivariate generalized Pareto distribution of Falk and Reiß [6]. The latter is defined, for all x1, …, xd ∈ (−∞, 0], by

L(x1, …, xd) = max{0, 1 − ℓ(−x1, …, −xd)},

in terms of a d-variate stable tail dependence function ℓ which is such that L is indeed a distribution function. This is automatically true when d = 2 but not necessarily when d > 2; counterexamples are given in [6,11]. Now suppose that (W11, …, W1d), …, (W(d−1)1, …, W(d−1)d) is a random sample of size d − 1 from the multivariate generalized Pareto distribution L, and for each j ∈ {1, …, d}, let Mj = − max(W1j, …, W(d−1)j). The survival function of (M1, …, Md) is then precisely Ḡℓ in (11). This is because for all m1, …, md ∈ [0, ∞),

Pr(M1 > m1, …, Md > md) = {Pr(W1 < −m1, …, Wd < −md)}^{d−1} = {L(−m1, …, −md)}^{d−1} = [max{0, 1 − ℓ(m1, …, md)}]^{d−1}.

The following example details the relation between representations (10) and (12).

Example 3.5. Suppose that ψ(x) = e^{−x} for all x > 0. As shown in Example 2.4, Cψ,ℓ is then the extreme-value copula D with stable tail dependence function ℓ. From Example 3.2 in [19], ψ is the Williamson d-transform of an Erlang distribution with mean d. Thus if the variable R in Eq. (12) has this distribution, (X1, …, Xd) has the same distribution as (T1, …, Td), denoted

(X1, …, Xd) =_d (T1, …, Td),   (13)

where (T1, …, Td) is as in Eq. (9), i.e., a random vector with unit exponential margins and survival copula D. Next, suppose that ψ is completely monotone, i.e., it is the Laplace transform of a strictly positive random variable Θ. According to Proposition 1 in [20], this is the case if and only if ψ is the Williamson d-transform of Zd/Θ, where Zd is independent of Θ and has an Erlang distribution with mean d. Thus if R = Zd/Θ in Eq. (12), it follows from (13) that the vector (X1, …, Xd) has the same distribution as (T1/Θ, …, Td/Θ) with (T1, …, Td) defined in Eq. (9). In other words, one recovers precisely the stochastic representation (10).

Proceeding as in the proof of Theorem 3.3, one can also see that for specific choices of d-variate stable tail dependence functions ℓ, the function Cψ,ℓ defined in (3) can be a copula even if ψ is not d-monotone. Examples to this effect are provided below. Thus the conditions on ψ and ℓ in Corollary 2.3 are sufficient but not necessary.


Example 3.6. Consider the stable tail dependence function ℓM corresponding to comonotonic extreme-value random variables. Let V0 be a uniform random variable on (0, 1) and set V1 = ⋯ = Vd = V0. Then, for all x1, …, xd ∈ [0, ∞), Pr(V1 > x1, …, Vd > xd) = max{0, 1 − ℓM(x1, …, xd)}. Next, let ψ be the generator of a bivariate Archimedean copula. Let also R be a strictly positive random variable with Williamson 2-transform ψ. If R is independent of V0, the survival function of R(V1, …, Vd) is then ψ ∘ ℓM as, for all x1, …, xd ∈ [0, ∞),

Pr(RV0 > x1, …, RV0 > xd) = ∫_0^∞ max{0, 1 − max(x1, …, xd)/r} dF(r) = ψ ∘ ℓM(x1, …, xd).

Thus for this specific choice of stable tail dependence function, Cψ,ℓM is a copula whenever ψ is a 2-monotone Archimedean generator. Hence when d > 2, the condition that ψ is d-monotone is sufficient but not necessary for this choice of ℓ.

Example 3.7. Example 3.6 can be generalized as in the work of Bacigál and Mesiar [2]. For fixed integer d ≥ 2, consider a partition D = {D1, …, Dk} of {1, …, d} into 1 ≤ k ≤ d disjoint, non-empty sets. These authors consider the stable tail dependence function defined, for every θ ≥ 1 and x1, …, xd ∈ [0, ∞), by

ℓD,θ(x1, …, xd) = [∑_{j=1}^{k} {max(xi : i ∈ Dj)}^θ]^{1/θ}.

Let (V01, …, V0k) be a random vector with survival function

Ḡθ(x1, …, xk) = [max{0, 1 − ℓθ(x1, …, xk)}]^{k−1},

where ℓθ is the stable tail dependence function of the logistic distribution already encountered in Eq. (7). Next define a vector (V1, …, Vd) by setting Vi = V0j for each i ∈ Dj and j ∈ {1, …, k}. Its joint survival function is then given, for all x1, …, xd ∈ [0, ∞), by

Pr(V1 > x1, …, Vd > xd) = Pr{V01 > max(xi : i ∈ D1), …, V0k > max(xi : i ∈ Dk)} = [max{0, 1 − ℓD,θ(x1, …, xd)}]^{k−1}.

Now let ψ be a k-monotone Archimedean generator. Mimicking the calculation detailed in Example 3.6, one can see that ψ ∘ ℓD,θ is the survival function of the random vector (RV1, …, RVd), where R is a strictly positive random variable, independent of (V01, …, V0k), whose Williamson k-transform is ψ. Thus for this specific choice of ℓD,θ, Cψ,ℓD,θ is a copula whenever ψ is a k-monotone Archimedean generator. When k = d, Cψ,ℓD,θ reduces to the Archimedean copula Cψ. Note that when θ = 1, Cψ,ℓD,1 is considered by Mesiar and Jágr [21] and the above derivation constitutes an alternative proof of their Theorem 4.2.

Finally, note that the stochastic representation given in Theorem 3.3 also yields an alternative characterization of multivariate stable tail dependence functions.

Corollary 3.8. A function ℓ : [0, ∞)^d → [0, ∞) is a d-dimensional stable tail dependence function if and only if

(a) ℓ is homogeneous of degree 1, i.e., for all k ∈ (0, ∞) and x1, …, xd ∈ [0, ∞), ℓ(kx1, …, kxd) = k ℓ(x1, …, xd);
(b) the function given, for all x1, …, xd ∈ [0, ∞), by

Ḡℓ(x1, …, xd) = [max{0, 1 − ℓ(x1, …, xd)}]^{d−1}

defines a d-variate survival function with B(1, d − 1) margins.

4. Simulation algorithms

Given the stochastic representations (10) and (12), the strategy for generating random samples from multivariate Archimax copulas is straightforward, at least in principle. Generic algorithms are described below in both cases, but the subsequent discussion will show that they are deceptively simple.

Algorithm 4.1. Let ℓ be the d-variate stable tail dependence function associated to an extreme-value copula D, and let ψ be a d-variate Archimedean copula generator. Suppose that ψ is completely monotone, i.e., the Laplace transform of a random variable Θ. To simulate an observation (U1, …, Ud) from a d-variate Archimax copula Cψ,ℓ, proceed as follows:

4.1.1 Generate an observation (V1, …, Vd) from the extreme-value copula D.
4.1.2 Set T1 = − ln(V1), …, Td = − ln(Vd).

¯ ℓ (x1 , . . . , xd ) = [max{0, 1 − ℓ(x1 , . . . , xd )}]d−1 G defines a d-variate survival function with B (1, d − 1) margins. 4. Simulation algorithms Given the stochastic representations (10) and (12), the strategy for generating random samples from multivariate Archimax copulas is straightforward, at least in principle. Generic algorithms are described below in both cases, but the subsequent discussion will show that they are deceptively simple. Algorithm 4.1. Let ℓ be the d-variate stable tail dependence function associated to an extreme-value copula D, and let ψ be a d-variate Archimedean copula generator. Suppose that ψ is completely monotone, i.e., the Laplace transform of a random variable Θ . To simulate an observation (U1 , . . . , Ud ) from a d-variate Archimax copula Cψ,ℓ , proceed as follows: 4.1.1 Generate an observation (V1 , . . . , Vd ) from extreme-value copula D. 4.1.2 Set T1 = − ln(V1 ), . . . , Td = − ln(Vd ).


4.1.3 Generate an observation Θ.
4.1.4 Set U1 = ψ(T1/Θ), …, Ud = ψ(Td/Θ).

In order to carry out Algorithm 4.1, one must be able to simulate observations from a d-variate extreme-value copula (Step 4.1.1), and from a random variable whose Laplace transform is given (Step 4.1.3). Step 4.1.1 is notoriously difficult, except in a few special cases. For example, Stephenson [28] gives a simulation scheme that can be used for various multivariate extreme-value distributions derived as extensions of the logistic model. More recently, Fougères et al. [7] describe simulation algorithms for both generalized logistic distributions and multivariate extreme-value distributions having discrete angular measures. Additionally, a generic simulation procedure based on the Poisson process representation of the spectral measure is sketched by Nadarajah [23]. Simulation methods for Step 4.1.3 can be found in [10].

If ψ is not a Laplace transform, observations from Cψ,ℓ can be generated as follows.

Algorithm 4.2. Let ℓ be the d-variate stable tail dependence function associated to an extreme-value copula D, and let ψ be a d-variate Archimedean copula generator. To simulate an observation (U1, …, Ud) from a d-variate Archimax copula Cψ,ℓ, carry out the steps below:

4.2.1 Generate an observation (S1, …, Sd) from the joint survival function defined, for all s1, …, sd ∈ [0, ∞), by

Ḡℓ(s1, …, sd) = [max{0, 1 − ℓ(s1, …, sd)}]^{d−1}.

4.2.2 Generate R from the distribution function defined, for all r ∈ (0, ∞), by

F(r) = 1 − ∑_{k=0}^{d−2} (−1)^k r^k ψ^{(k)}(r)/k! − (−1)^{d−1} r^{d−1} ψ₊^{(d−1)}(r)/(d − 1)!,

where ψ₊^{(d−1)} denotes the right-hand derivative of ψ^{(d−2)}.

4.2.3 Set U1 = ψ(RS1), …, Ud = ψ(RSd).

In comparison with the previous procedure, Algorithm 4.2 has the advantage of being valid for any d-monotone Archimedean generator ψ. Unless ψ is the Laplace transform of a standard distribution, Step 4.2.2 is also more convenient than Step 4.1.3 for simulation purposes. In addition, Algorithm 4.2 can be used to simulate any multivariate extreme-value distribution: given that Cψ,ℓ = Cℓ when ψ(x) = e^{−x} for all x > 0, it would suffice to draw R from the Erlang distribution with mean d.

The Achilles heel of Algorithm 4.2 is the generation of the vector (S1, …, Sd) in Step 4.2.1. There are two cases where it can be carried out easily:

(a) Suppose that ℓ = ℓΠ is the stable tail dependence function corresponding to the independence copula. Then (S1, …, Sd) is uniform on the simplex Sd and Cψ,ℓΠ = Cψ is simply an Archimedean copula. In this case, Algorithm 4.2 reduces to the procedure described in Section 5 of McNeil and Nešlehová [19].

(b) Suppose that ℓ = ℓM is the stable tail dependence function associated to comonotonicity. In that case, one finds that, for all s1, …, sd ∈ [0, ∞),

Pr(S1 > s1, …, Sd > sd) = ḠℓM(s1, …, sd) = [max{0, 1 − ℓM(s1, …, sd)}]^{d−1} = min[{max(0, 1 − s1)}^{d−1}, …, {max(0, 1 − sd)}^{d−1}]

and hence S1 = ⋯ = Sd with probability 1. As their common marginal distribution is B(1, d − 1), their simulation is trivial.

For other choices of ℓ, Step 4.2.1 can be carried out by computing successive conditional distributions; examples are presented in the following section.

5. Illustrations

In order to illustrate Algorithm 4.2, its implementation is discussed below for the stable tail dependence functions of two classical families of extreme-value distributions. The bivariate Marshall–Olkin copula is considered in Section 5.1, and the multivariate Gumbel copula is treated in Section 5.2. These examples reveal something of the general nature of survival functions of the form (11). The Bacigál–Jágr–Mesiar Archimax copulas of Example 3.7 are also revisited at the end of Section 5.2.

5.1. Marshall–Olkin extreme-value copulas

As mentioned in Remark 3.2, scale mixtures of multivariate extreme-value distributions with unit Fréchet margins have Archimax copulas. The tail dependence properties of such mixtures were studied by Li [15] when the extreme-value copula


Fig. 1. Left panel: support of the survival function Ḡℓα,β when α = 0.5 and β = 0.3. Other panels: support of the bivariate copula Cψ,ℓα,β for the same α and β when ψ is the Williamson 2-transform of a Bin(n, 0.4) variable R with n = 1 (middle) or n = 4 (right).

is from the Marshall–Olkin class. Bernhart et al. [4] also proposed an extension of the Lévy-frailty copula model of Mai and Scherer [16] whose dependence structure is an Archimax copula Cψ,ℓ with a Marshall–Olkin stable tail dependence function ℓ. In all these papers, however, ψ was a Laplace transform. To see how the restriction on ψ can be lifted using Algorithm 4.2, consider bivariate Marshall–Olkin copulas, which are indexed by parameters α, β ∈ (0, 1). As mentioned, e.g., by Segers [27], their stable tail dependence function is given, for all x1 , x2 ∈ [0, ∞), by

ℓα,β (x1 , x2 ) = x1 + x2 − min(α x1 , β x2 ).

(14)

To get an observation from a random pair (S1, S2) with survival function Ḡ_{ℓα,β} = max(0, 1 − ℓα,β), simply draw S1 uniformly from the interval (0, 1) and observe that the conditional distribution of S2 given S1 = s1 is either degenerate or concentrated on two points, depending on the sign of s1 − γ, where

γ = β/(α + β − αβ) ∈ (0, 1).

More specifically, if s1 ≥ γ, then S2 = (1 − s1)/(1 − β), while if s1 < γ, then

S2 = α s1/β with probability α;
S2 = 1 − s1(1 − α) with probability 1 − α.
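This two-point conditional law is straightforward to code. The following Python sketch is an illustration of ours, not part of the paper; the parameter values α = 0.5 and β = 0.3 match Fig. 1:

```python
import random

ALPHA, BETA = 0.5, 0.3                        # Marshall-Olkin parameters
GAMMA = BETA / (ALPHA + BETA - ALPHA * BETA)  # the threshold gamma

def ell(x1, x2):
    # Marshall-Olkin stable tail dependence function, Eq. (14)
    return x1 + x2 - min(ALPHA * x1, BETA * x2)

def draw_pair():
    # one draw of (S1, S2) with survival function max(0, 1 - ell(x1, x2))
    s1 = random.random()                      # S1 is uniform on (0, 1)
    if s1 >= GAMMA:                           # degenerate branch
        return s1, (1.0 - s1) / (1.0 - BETA)
    if random.random() < ALPHA:               # singular line segment
        return s1, ALPHA * s1 / BETA
    return s1, 1.0 - s1 * (1.0 - ALPHA)       # second atom
```

As a check, Pr(S1 > 0.3, S2 > 0.3) should be close to max{0, 1 − ℓα,β(0.3, 0.3)} = 0.49.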

The left panel of Fig. 1 shows the support of a pair (S1, S2) with α = 0.5 and β = 0.3. The other two panels show the support of a pair (U, V) from an Archimax copula C_{ψ,ℓα,β} with the same values of α and β when ψ is the Williamson 2-transform of a Bin(n, 0.4) radial variable R, either with n = 1 (middle) or n = 4 (right). As an aside, if R were identically equal to 1, the support of (U, V) would then be as described in Exercise 3.8 of Nelsen [24] and illustrated in his Fig. 3.8a.

From the left panel of Fig. 1, one can see that the distribution of (S1, S2) is singular when ℓ is of the form (14). In particular, the event ℓα,β(S1, S2) = 1 occurs with probability 1 − αγ = 1 − τα,β, where τα,β is the value of Kendall's tau for the bivariate Marshall–Olkin copula; see, e.g., Example 5.5 in [24]. Observe how this single north-west to south-east curve gets reverberated as the range of R increases. It can be seen in general that if R is Bin(n, p) with p ∈ (0, 1), the support of (U, V) consists of n + 1 north-west to south-east curves, all connected by a single south-west to north-east curve. One might conjecture, therefore, that when the distribution of R is continuous, the mass located on the set {(s1, s2) ∈ [0, 1]^2 : ℓα,β(s1, s2) = 1} is spread out continuously on the unit square, but that the remaining singularity, which occurs along the line segment {(s1, α s1/β) : s1 ∈ [0, γ]} with probability αγ, never fades away.

Evidence of the mass spreading phenomenon is provided in Fig. 2. Displayed there are four random samples of size 2000 from an Archimax copula C_{ψ,ℓα,β} derived from a Marshall–Olkin stable tail dependence function ℓα,β with parameters α = 0.5 and β = 0.3. The distribution of R varies from graph to graph, namely:

Top left: R is Pareto P(ϑ) with ϑ = 4.5, where for any ϑ > 0 and all r ≥ 1, Pr(R ≤ r) = 1 − r^{−ϑ};
Top right: 1/R is P(4.5);
Bottom left: R has a Gamma distribution G(ϑ, 1) with ϑ = 0.3;
Bottom right: 1/R is G(0.3, 1).
In all cases, the Williamson 2-transform ψ of R can be computed explicitly; see [20], where samples from the corresponding Archimedean copulas can be found for comparison purposes. The case when R is G(ϑ, 1) is of special interest because the resulting class of Archimax copulas reduces to the classical Marshall–Olkin extreme-value copula family when ϑ = 2.

Fig. 2. Random samples of size 2000 from the copula C_{ψ,ℓα,β} when α = 0.5, β = 0.3, and ψ is the Williamson 2-transform of a radial variable R such that R is P(4.5) (top left), 1/R is P(4.5) (top right), R is G(0.3, 1) (bottom left) or 1/R is G(0.3, 1) (bottom right).

5.2. Gumbel extreme-value copulas

The Gumbel family, indexed by a parameter θ ∈ [1, ∞), is possibly the best known class of multivariate extreme-value copulas. Its stable tail dependence function ℓθ, given in Eq. (7), corresponds to the multivariate logistic model. As mentioned in Example 2.5, C_{ψ,ℓ} is Archimedean whenever ℓ = ℓθ for some θ ≥ 1. In this case, therefore, Algorithm 4.2 merely describes a different way of simulating copulas of the form (8). Nevertheless, it is instructive to examine how one can simulate from the survival function Ḡθ = {max(0, 1 − ℓθ)}^{d−1}. The following auxiliary distributions play a role in designing such an algorithm.

Definition 5.1. Fix ξ > 0, θ > 1, and for arbitrary σ ∈ (0, 1), write σθ = (1 − σ)^{1/θ}. Let Hθ,σ,ξ and Qθ,σ,ξ be univariate distribution functions defined, for all t ∈ R, by

Hθ,σ,ξ(t) = 1{t ∈ [0, ∞)} − {σ/(t^θ + σ)}^ξ 1{t ∈ [0, σθ)}

and

Qθ,σ,ξ(t) = 1{t ∈ [0, ∞)} − [{1 − (t^θ + σ)^{1/θ}}/(1 − σ^{1/θ})]^ξ 1{t ∈ [0, σθ]}.

Simulating observations from Hθ,σ,ξ and Qθ,σ,ξ is simple, given that their inverses are explicit. While Hθ,σ,ξ has a jump of size σ^ξ at t = σθ, Qθ,σ,ξ is continuous except in the limiting case where ξ = 0, where it reduces to a point mass at t = σθ.

As stated below and proved in the Appendix, if (S1, . . . , Sd) is a random vector with survival function Ḡθ, then for arbitrary k ∈ {1, . . . , d − 1}, the conditional distribution of Sk+1 given S1 = s1, . . . , Sk = sk can be expressed in terms of Hθ,σ,ξ and Qθ,σ,ξ with appropriate choices of parameters σ and ξ. The following notation simplifies the statement of this result.
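Because both inverses are available in closed form, inversion sampling is immediate. The Python sketch below is our own illustration of the two quantile functions implied by Definition 5.1 (the parameter values in the usage check are arbitrary):

```python
def h_cdf(t, theta, sigma, xi):
    # H_{theta,sigma,xi}: jump of size sigma**xi at sig_th = (1 - sigma)**(1/theta)
    sig_th = (1.0 - sigma) ** (1.0 / theta)
    if t < 0.0:
        return 0.0
    if t >= sig_th:
        return 1.0
    return 1.0 - (sigma / (t ** theta + sigma)) ** xi

def h_inv(u, theta, sigma, xi):
    # quantile function of H: smooth on [0, 1 - sigma**xi), atom at the endpoint
    if u >= 1.0 - sigma ** xi:
        return (1.0 - sigma) ** (1.0 / theta)
    return (sigma * ((1.0 - u) ** (-1.0 / xi) - 1.0)) ** (1.0 / theta)

def q_cdf(t, theta, sigma, xi):
    # Q_{theta,sigma,xi}: continuous, supported on [0, (1 - sigma)**(1/theta)]
    sig_th = (1.0 - sigma) ** (1.0 / theta)
    if t < 0.0:
        return 0.0
    if t > sig_th:
        return 1.0
    num = 1.0 - (t ** theta + sigma) ** (1.0 / theta)
    return 1.0 - (num / (1.0 - sigma ** (1.0 / theta))) ** xi

def q_inv(u, theta, sigma, xi):
    # quantile function of Q
    inner = 1.0 - (1.0 - sigma ** (1.0 / theta)) * (1.0 - u) ** (1.0 / xi)
    return max(inner ** theta - sigma, 0.0) ** (1.0 / theta)
```

Feeding independent uniforms through `h_inv` and `q_inv` then yields draws from H and Q.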


Notation 5.2. For arbitrary integers k ≤ d − 1, real θ > 1 and i ∈ {1, . . . , k}, let

ηθ,i,k = θ^{k−i} ∏_{m=2}^{i} (d − m) ∑ ∏_{m=1}^{k−i} {ιm − (ιm − m + 1)/θ},

where the sum is over all integers 1 ≤ ι1 < · · · < ι_{k−i} ≤ k − 1 such that ιm ≤ i + m − 1 for every m ∈ {1, . . . , k − i}, with the convention that the sum equals 1 when i = k; in particular ηθ,1,1 = 1, ηθ,1,k = θ^{k−1}(1 − 1/θ) · · · (k − 1 − 1/θ) and ηθ,k,k = (d − 2) · · · (d − k). Moreover, given s1, . . . , sk ∈ (0, 1) such that σk = s1^θ + · · · + sk^θ < 1, set

πi,k = ηθ,i,k (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−k} / ∑_{i′=1}^{k} ηθ,i′,k (1 − σk^{1/θ})^{d−i′−1} σk^{i′/θ−k},

so that π1,k + · · · + πk,k = 1.

Proposition 5.3. Let (S1, . . . , Sd) be a random vector with survival function Ḡθ = {max(0, 1 − ℓθ)}^{d−1}. Then, for arbitrary k ∈ {1, . . . , d − 1}, s1, . . . , sk ∈ (0, 1) with σk < 1, and all s ∈ (0, 1),

Pr(Sk+1 ≤ s | S1 = s1, . . . , Sk = sk) = 1 − ∑_{i=1}^{k} πi,k H̄θ,σk,k−i/θ(s) Q̄θ,σk,d−i−1(s).

For each i ∈ {1, . . . , k}, 1 − H̄θ,σk,k−i/θ Q̄θ,σk,d−i−1 is the distribution function of Sk+1,i = min(Sk+1,i,1, Sk+1,i,2), where Sk+1,i,1 and Sk+1,i,2 are independent random variables distributed as Hθ,σk,k−i/θ and Qθ,σk,d−i−1, respectively. The conditional distribution of Sk+1 is thus a mixture, with weights π1,k, . . . , πk,k, of the distributions of these minima. Further note that Sk+1,i is continuous when k < d − 1 while when k = d − 1, it has an atom at (1 − σ_{d−1})^{1/θ}.

To illustrate how Proposition 5.3 translates into an efficient simulation algorithm for Ḡθ, consider the case d = 3.

Algorithm 5.4. Let ℓθ be the stable tail dependence function of a trivariate Gumbel extreme-value copula with θ > 1. To simulate an observation from a random vector (S1, S2, S3) having survival function Ḡθ = {max(0, 1 − ℓθ)}^2, carry out the steps below:

5.4.1 Generate S1 from the B(1, 2) distribution and set σ1 = S1^θ.
5.4.2 Generate S2,1,1 and S2,1,2 independently from Hθ,σ1,1−1/θ and Qθ,σ1,1, respectively.
5.4.3 Set S2 = min(S2,1,1, S2,1,2) and σ2 = S1^θ + S2^θ.

5.4.4 Generate S3,1,1, S3,1,2 and S3,2,1 independently from Hθ,σ2,2−1/θ, Qθ,σ2,1, and Hθ,σ2,2−2/θ, respectively.
5.4.5 Generate an observation U from a Bernoulli distribution with success probability

π = θ^{−1} σ2^{2/θ−2} / {θ^{−1} σ2^{2/θ−2} + (1 − θ^{−1}) σ2^{1/θ−2} (1 − σ2^{1/θ})}.

5.4.6 Set S3 = U × S3,2,1 + (1 − U) × min(S3,1,1, S3,1,2).

Fig. 3 shows two random samples of size 2000 from the distribution with survival function {max(0, 1 − ℓθ)}^{d−1} with θ = 2. In the left-hand panel d = 2, while d = 3 in the right-hand panel. In both cases, a singularity is clearly visible along the curve ℓθ(S1, . . . , Sd) = 1. Note again in passing that when d = 2, the event ℓθ(S1, S2) = 1, i.e., S1^θ + S2^θ = 1, occurs with probability 1/θ = 1 − τθ, where τθ is the value of Kendall's tau for the bivariate Gumbel copula.

Portrayed in Fig. 4 are random samples of size 2000 from the Archimax copula C_{ψ,ℓθ} derived from a Gumbel stable tail dependence function ℓθ with parameter θ = 2. The distribution of R varies from graph to graph, namely:

Top left: R is Bernoulli(p) with success probability p = 0.4;
Top right: R is Bin(4, 0.4);
Middle left: R is P(4.5);
Middle right: 1/R is P(4.5);


Fig. 3. Random samples of size 2000 from the distribution with survival function {max(0, 1 − ℓθ )}d−1 when θ = 2 and d = 2 (left) or d = 3 (right).
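Algorithm 5.4 can be sketched in a few lines of Python. The code below is an illustrative implementation of ours, not the authors', with the inverse distribution functions of Hθ,σ,ξ and Qθ,σ,ξ written out explicitly from Definition 5.1:

```python
import math
import random

def h_inv(u, theta, sigma, xi):
    # quantile function of H_{theta,sigma,xi} (jump of size sigma**xi at the endpoint)
    if u >= 1.0 - sigma ** xi:
        return (1.0 - sigma) ** (1.0 / theta)
    return (sigma * ((1.0 - u) ** (-1.0 / xi) - 1.0)) ** (1.0 / theta)

def q_inv(u, theta, sigma, xi):
    # quantile function of Q_{theta,sigma,xi}
    inner = 1.0 - (1.0 - sigma ** (1.0 / theta)) * (1.0 - u) ** (1.0 / xi)
    return max(inner ** theta - sigma, 0.0) ** (1.0 / theta)

def algorithm_5_4(theta):
    # one draw of (S1, S2, S3) with survival function {max(0, 1 - l_theta)}^2
    s1 = 1.0 - math.sqrt(1.0 - random.random())          # 5.4.1: S1 ~ B(1, 2)
    sig1 = s1 ** theta
    s2 = min(h_inv(random.random(), theta, sig1, 1.0 - 1.0 / theta),
             q_inv(random.random(), theta, sig1, 1.0))   # 5.4.2-5.4.3
    sig2 = sig1 + s2 ** theta
    num = sig2 ** (2.0 / theta - 2.0) / theta            # 5.4.5: success probability pi
    den = num + (1.0 - 1.0 / theta) * sig2 ** (1.0 / theta - 2.0) * (1.0 - sig2 ** (1.0 / theta))
    if random.random() < num / den:
        s3 = h_inv(random.random(), theta, sig2, 2.0 - 2.0 / theta)
    else:
        s3 = min(h_inv(random.random(), theta, sig2, 2.0 - 1.0 / theta),
                 q_inv(random.random(), theta, sig2, 1.0))
    return s1, s2, s3                                    # 5.4.6
```

One can check empirically that Pr(S1 > x, S2 > x, S3 > x) ≈ {1 − (3x^θ)^{1/θ}}^2 for small x, as required by Ḡθ.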

Bottom left: R is G(0.3, 1);
Bottom right: 1/R is G(0.3, 1).

As these graphs involve the same choices of radial distributions used to produce Figs. 1 and 2, pair-wise comparisons give a sense of the influence of the stable tail dependence function on the shape of the Archimax copula for fixed ψ. Looking at the top row of Fig. 4, one can again see how the singularity along the curve ℓθ(S1, S2) = 1 gets reverberated n + 1 times when R is Bin(n, p). While this phenomenon disappears when R is continuous, other constraints appear, e.g., in the support of the copula when 1/R is G(ϑ, 1).

Finally, note that Algorithm 5.4 can be used to generate samples from the Bacigál–Jágr–Mesiar Archimax copulas C_{ψ,ℓD,θ} using the stochastic representation given in Example 3.7. As an illustration, Fig. 5 shows a trivariate sample of size 2000 when the partition D is {{1}, {2, 3}} and 1/R is P(4.5). Note that when (U1, U2, U3) is distributed according to this specific Archimax copula, the pair (U2, U3) is comonotone, while the pairs (U1, U2) and (U1, U3) both have a bivariate Gumbel–Archimax copula as displayed in the middle right panel of Fig. 4.

6. Extremal behavior of Archimax copulas

As in the bivariate case, multivariate Archimax copulas can be used to construct multivariate distribution functions in the domain of attraction of a given multivariate extreme-value copula. To this end, let X1, X2, . . . be mutually independent copies of a vector X = (X1, . . . , Xd) whose distribution is the Archimax copula C_{ψ,ℓ}, and define for each n ∈ N,

Mn = max(X1, . . . , Xn),

Wn = min(X1, . . . , Xn),

where vector algebra is meant component-wise. It is of interest to find the limiting behavior, as n → ∞, of the sequences (Mn) and (Wn). When they exist, these limits characterize the maximum and minimum attractors of X, respectively. The forms of these limits, and conditions for their existence, are given in Sections 6.1 and 6.2 for the case of the maximum and minimum, respectively.

6.1. Maximum attractor

Given that the univariate margins of X are uniform on the interval (0, 1), each of them belongs to the maximum domain of attraction of the standard Weibull distribution, meaning that, for all t < 0,

lim_{n→∞} Pr{n(M1n − 1) ≤ t} = e^t.

As for the dependence between the limiting Weibull random variables, it is ruled by an extreme-value copula Cℓ⋆ which is given, for all u1, . . . , ud ∈ (0, 1), by

Cℓ⋆(u1, . . . , ud) = lim_{n→∞} C^n_{ψ,ℓ}(u1^{1/n}, . . . , ud^{1/n}).


Fig. 4. Random samples of size 2000 from the copula Cψ,ℓθ when θ = 2 and ψ is the Williamson 2-transform of a radial variable R such that either R is Bin(n, 0.4) with n = 1 (top left) or n = 4 (top right), R is P (4.5) (middle left) or 1/R is P (4.5) (middle right), and R is G(0.3, 1) (bottom left) or 1/R is G(0.3, 1) (bottom right).

For details, see, e.g., Section 8.3.2 in [3]. By analogy with Proposition 4.1 of Capéraà et al. [5], one might expect the limiting behavior of the sequence (Mn ) to depend on the regular variation of either one of the maps:

κψ : (0, ∞) → (0, ∞) : w → 1 − ψ(1/w),
λψ : (0, ∞) → (0, ∞) : w → ψ^{−1}(1 − 1/w).

Recall that a function f : R+ → R+ is regularly varying with index α ∈ R if for all x ∈ R+, f(tx)/f(t) → x^α as t → ∞. In what follows, the class of regularly varying functions with index α is denoted Rα. As stated, e.g., in Proposition 1 of Larsson and Nešlehová [14],

κψ ∈ R−α ⇔ λψ ∈ R−1/α.  (15)


Fig. 5. Random samples of size 2000 from the trivariate Bacigál–Jágr–Mesiar Archimax copula Cψ,ℓD ,θ when θ = 2, D = {{1}, {2, 3}} and 1/R is P (4.5).

Proposition 6.1. Suppose that ψ is the generator of a d-variate Archimedean copula such that κψ ∈ R−α for some α ∈ (0, 1]. Then the copula C_{ψ,ℓ} belongs to the maximum domain of attraction of an extreme-value distribution whose unique underlying copula is defined, for all u1, . . . , ud ∈ (0, 1), by

Cℓ⋆(u1, . . . , ud) = exp[−ℓ^α{|ln(u1)|^{1/α}, . . . , |ln(ud)|^{1/α}}].

Proof. One can proceed along the same lines as the proof of Proposition 4.1 of Capéraà et al. [5]. Given u1, . . . , ud ∈ (0, 1), write

C^n_{ψ,ℓ}(u1^{1/n}, . . . , ud^{1/n}) = (1 − an/n)^n,  (16)

where

an = n[1 − ψ ◦ ℓ{ψ^{−1}(u1^{1/n}), . . . , ψ^{−1}(ud^{1/n})}].

Thus it suffices to show that an → y^α as n → ∞, where

y = ℓ{|ln(u1)|^{1/α}, . . . , |ln(ud)|^{1/α}},

to conclude that when the limit is taken in (16), one finds e^{−y^α}. To establish this claim, first use the homogeneity of ℓ to write

ℓ{ψ^{−1}(u1^{1/n}), . . . , ψ^{−1}(ud^{1/n})} = yn ψ^{−1}(1 − 1/n),

where

yn = ℓ{ψ^{−1}(u1^{1/n})/ψ^{−1}(1 − 1/n), . . . , ψ^{−1}(ud^{1/n})/ψ^{−1}(1 − 1/n)}.


In view of (15), the hypothesis on κψ implies that λψ ∈ R−1/α. Given that convergence in the definition of regular variation is uniform, one has, for each j ∈ {1, . . . , d},

lim_{n→∞} ψ^{−1}(uj^{1/n})/ψ^{−1}(1 − 1/n) = lim_{n→∞} ψ^{−1}{1 − n(1 − uj^{1/n})/n}/ψ^{−1}(1 − 1/n) = |ln uj|^{1/α}.

The continuity of ℓ then implies that yn → y as n → ∞. Finally, one can call directly on the regular variation of κψ to conclude that

lim_{n→∞} an = lim_{n→∞} n[1 − ψ{yn ψ^{−1}(1 − 1/n)}] = y^α,

and the argument is complete. □

Remark 6.2. The conditions on κψ in Proposition 6.1 can also be stated in terms of the tail behavior of the radial variable R in the stochastic representation (12). Indeed, from Theorem 2 in [14], it is known that κψ ∈ R−α for some α ∈ (0, 1) if and only if 1/R is in the maximum domain of attraction of the Fréchet distribution with parameter α. Furthermore, κψ ∈ R−1 occurs if 1/R is in the maximum domain of attraction of either (i) the Weibull distribution; (ii) the Gumbel distribution; (iii) the Fréchet distribution with parameter α ≥ 1.

For the stable tail dependence function ℓΠ corresponding to the independence copula, Proposition 6.1 reduces to the well-known result on the maximum domain of attraction of Archimedean copulas; see, e.g., Genest and Rivest [8] or Larsson and Nešlehová [14]. Proposition 6.1 also extends Proposition 4.1 in [5], which showed that when d = 2, the extreme-value copula is given, for all u1, u2 ∈ (0, 1), by

Cℓ⋆(u1, u2) = exp[ln(u1 u2) A⋆{ln(u1)/(ln(u1) + ln(u2))}]

in terms of a Pickands dependence function A⋆ which is related via (2) to the Pickands dependence function A of ℓ. More specifically, one has the following result.

Corollary 6.3. For all u1, . . . , ud ∈ (0, 1), one has

Cℓ⋆(u1, . . . , ud) = exp{ln(u1 · · · ud) A⋆(t1, . . . , td−1)},

where, for all k ∈ {1, . . . , d}, tk = |ln(uk)|/{|ln(u1)| + · · · + |ln(ud)|}, so that (t1, . . . , td) is in the simplex Sd = {(x1, . . . , xd) ∈ R+^d : x1 + · · · + xd = 1} and

A⋆(t1, . . . , td−1) = ℓ^α{t1^{1/α}, . . . , td−1^{1/α}, (1 − t1 − · · · − td−1)^{1/α}},

which can equivalently be rewritten as

A⋆(t1, . . . , td−1) = (t1^{1/α} + · · · + td^{1/α})^α ℓ^α{t1^{1/α}/(t1^{1/α} + · · · + td^{1/α}), . . . , td^{1/α}/(t1^{1/α} + · · · + td^{1/α})}.

Proof. Fix α ∈ (0, 1] and u1, . . . , ud ∈ (0, 1). Note that, for k ∈ {1, . . . , d},

|ln(uk)|^{1/α}/{|ln(u1)|^{1/α} + · · · + |ln(ud)|^{1/α}} = tk^{1/α}/(t1^{1/α} + · · · + td^{1/α}).

Using the homogeneity of ℓ, one can then write

ℓ{|ln(u1)|^{1/α}, . . . , |ln(ud)|^{1/α}} = {|ln(u1)|^{1/α} + · · · + |ln(ud)|^{1/α}} × ℓ{t1^{1/α}/(t1^{1/α} + · · · + td^{1/α}), . . . , td^{1/α}/(t1^{1/α} + · · · + td^{1/α})}.

Moreover one has, for all k ∈ {1, . . . , d},

{|ln(u1)|^{1/α} + · · · + |ln(uk)|^{1/α}}/{|ln(u1)| + · · · + |ln(ud)|}^{1/α} = t1^{1/α} + · · · + tk^{1/α}.

Combining these various facts, one finds that Cℓ⋆ has the announced form. □
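Proposition 6.1 is easy to corroborate numerically for ℓ = ℓΠ, the stable tail dependence function of independence, in which case C_{ψ,ℓ} is Archimedean. The sketch below is our own illustration: for the generator ψ(t) = exp(−t^α) one has κψ ∈ R−α and the attractor is the bivariate Gumbel copula, which is in fact reached exactly for every n because it is max-stable; for the Clayton generator ψ(t) = (1 + t)^{−1/ϑ} one has κψ ∈ R−1, so C^n_{ψ,ℓ}(u1^{1/n}, u2^{1/n}) approaches the independence copula u1 u2.

```python
import math

def archimedean(u1, u2, psi, psi_inv):
    # C_{psi, l_Pi}(u1, u2) = psi(psi^{-1}(u1) + psi^{-1}(u2))
    return psi(psi_inv(u1) + psi_inv(u2))

def rescaled(u1, u2, n, psi, psi_inv):
    # C^n(u1^{1/n}, u2^{1/n}): the quantity whose limit is the attractor
    return archimedean(u1 ** (1.0 / n), u2 ** (1.0 / n), psi, psi_inv) ** n

# generator psi(t) = exp(-t^alpha): kappa_psi is regularly varying with index -alpha
alpha = 0.5
psi_g = lambda t: math.exp(-t ** alpha)
psi_g_inv = lambda u: (-math.log(u)) ** (1.0 / alpha)

def gumbel_attractor(u1, u2):
    # exp[-{|ln u1|^(1/alpha) + |ln u2|^(1/alpha)}^alpha], per Proposition 6.1
    return math.exp(-((-math.log(u1)) ** (1.0 / alpha)
                      + (-math.log(u2)) ** (1.0 / alpha)) ** alpha)

# Clayton generator psi(t) = (1 + t)^(-1/v): kappa_psi in R_{-1}, attractor u1*u2
v = 2.0
psi_c = lambda t: (1.0 + t) ** (-1.0 / v)
psi_c_inv = lambda u: u ** (-v) - 1.0
```

At u1 = u2 = 0.5, the Gumbel case agrees with its attractor for every n, while the Clayton case converges at rate O(1/n).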


6.2. Minimum attractor

Given that the univariate margins of X are uniform on the interval (0, 1), each of them belongs to the minimum domain of attraction of the standard exponential distribution. Indeed, for arbitrary t > 0, Pr(nW1n > t) = (1 − t/n)^n and hence

lim_{n→∞} Pr(nW1n > t) = e^{−t}.

As for the copula of Wn, the following proposition shows that its limiting behavior depends on the regular variation of the map

µψ : (0, ∞) → (0, ∞) : w → ψ^{−1}(1/w).

Proposition 6.4. Suppose that ψ is the generator of a d-variate Archimedean copula such that µψ ∈ Rα for some α ∈ (0, ∞). Then the copula C_{ψ,ℓ} belongs to the minimum domain of attraction of an extreme-value distribution whose unique underlying copula is defined, for all u1, . . . , ud ∈ (0, 1), by

D⋆(u1, . . . , ud) = ∑_{ι1,...,ιd ∈ {0,1}} (−1)^{ι1+···+ιd} K(ι1 u1, . . . , ιd ud),

where for arbitrary v1, . . . , vd ∈ [0, 1],

K(v1, . . . , vd) = exp[− ∑_{ι1,...,ιd ∈ {0,1}} (−1)^{ι1+···+ιd} ln C_{ψ⋆,ℓ}(1 − ι1 v1, . . . , 1 − ιd vd)],

with ψ⋆(t) = exp(−t^{−1/α}) for all t > 0.

Proof. Let C̄_{ψ,ℓ} be the survival copula corresponding to C_{ψ,ℓ}. It is defined, for all w1, . . . , wd ∈ (0, 1), by

C̄_{ψ,ℓ}(w1, . . . , wd) = ∑_{ι1,...,ιd ∈ {0,1}} (−1)^{ι1+···+ιd} C_{ψ,ℓ}(1 − ι1 w1, . . . , 1 − ιd wd).

Now let CWn be the copula of Wn. It then follows from Theorem 2 on p. 7 of [9] that, for all u1, . . . , ud ∈ (0, 1),

CWn(u1, . . . , ud) = ∑_{ι1,...,ιd ∈ {0,1}} (−1)^{ι1+···+ιd} C̄^n_{ψ,ℓ}{(1 − ι1 u1)^{1/n}, . . . , (1 − ιd ud)^{1/n}}.

The problem thus reduces to computing, for any given v1, . . . , vd ∈ (0, 1),

K(v1, . . . , vd) = lim_{n→∞} C̄^n_{ψ,ℓ}{(1 − v1)^{1/n}, . . . , (1 − vd)^{1/n}}.

To find this limit, set w1 = 1 − v1, . . . , wd = 1 − vd and let

an = n ∑_{ι1,...,ιd ∈ {0,1}, ι1+···+ιd > 0} (−1)^{ι1+···+ιd} C_{ψ,ℓ}(1 − ι1 w1^{1/n}, . . . , 1 − ιd wd^{1/n})

so that

C̄^n_{ψ,ℓ}(w1^{1/n}, . . . , wd^{1/n}) = (1 + an/n)^n.

If one can show that an → a as n → ∞, the required limit will then be e^a. The result will then follow as soon as one can show that, for any ι1, . . . , ιd ∈ {0, 1},

lim_{n→∞} n C_{ψ,ℓ}(1 − ι1 w1^{1/n}, . . . , 1 − ιd wd^{1/n}) = − ln{C_{ψ⋆,ℓ}(1 − ι1(1 − w1), . . . , 1 − ιd(1 − wd))}.

To see this, set

sn = ℓ{ψ^{−1}(1 − ι1 w1^{1/n})/ψ^{−1}(1/n), . . . , ψ^{−1}(1 − ιd wd^{1/n})/ψ^{−1}(1/n)}.

Given that µψ is regularly varying, one has that

lim_{n→∞} ψ^{−1}(1 − ιj wj^{1/n})/ψ^{−1}(1/n) = |ln wj|^{−α}

whenever ιj = 1, while the limit is 0 when ιj = 0, so that when n → ∞, sn converges to s = {− ln C_{ψ⋆,ℓ}(1 − ι1(1 − w1), . . . , 1 − ιd(1 − wd))}^{−α}. Invoking the homogeneity and continuity of ℓ, and using the fact that as n → ∞, nψ{sn ψ^{−1}(1/n)} converges to s^{−1/α}, one can conclude. □


Proposition 6.4 extends Proposition 4.2 in [5] stated in the case d = 2, as well as Proposition 3 in [14] valid for all d ≥ 2 but in the special case where ℓ = ℓΠ. Note that the conditions of Proposition 6.4 are related to the extremal behavior of the radial variable R from the stochastic representation (12) of Archimax copulas. Indeed, it is well known that µψ ∈ Rα if and only if ψ ∈ R−1/α. From Theorem 1 and Proposition 1 in [14], either of these conditions occurs if and only if R is in the maximum domain of attraction of the Fréchet distribution with parameter 1/α.

7. Discussion

This paper was initially motivated by the work of Mesiar and Jágr [21], who conjectured that Expression (3) leads to a bona fide copula for any combination of d-variate Archimedean copula generator ψ and stable tail dependence function ℓ. Theorem 2.1 and Corollary 2.3 prove this conjecture and hence confirm the validity of these authors' d-variate extension of the class of bivariate Archimax copulas.

When they introduced bivariate Archimax copulas, Capéraà et al. [5] viewed them primarily as a way of generating copulas with a given extreme-value attractor, e.g., in the context of simulation studies. Clearly, multivariate Archimax copulas can serve the same purpose, and the identification of their maximum and minimum extreme-value attractors provided in Section 6 facilitates this task.

This paper also provides, for the first time, stochastic representations of Archimax copulas. The probabilistic constructions given in Section 3 include as special cases, and provide a link between, stochastic representations for Archimedean and extreme-value copulas. In addition to shedding light on the nature of multivariate Archimax copulas, Proposition 3.1 and Theorem 3.3 motivate the simulation algorithms described in Section 4. Various questions remain to be addressed before multivariate Archimax copulas can be used as effectively in dimension d ≥ 3 as they currently are in the bivariate case.
As pointed out in Section 4, there are computational issues associated with Algorithms 4.1 and 4.2. Section 5 showed how the latter algorithm could be implemented in two non-trivial, yet relatively simple cases, thereby throwing additional light on survival functions of the form {max(0, 1 −ℓ)}d−1 . Nevertheless, much remains to be understood about these distributions and multivariate Archimax copulas in general. For example when d = 2, Capéraà et al. [5] showed, by direct calculation, that Kendall’s tau τψ,ℓ of an Archimax copula Cψ,ℓ is a weighted average between 1 and Kendall’s tau τψ of the bivariate Archimedean copula Cψ , viz.

τψ,ℓ = τℓ + (1 − τℓ)τψ.

Here, the weights are in terms of Kendall's tau τℓ of the bivariate extreme-value copula Cℓ. It would of course be desirable to extend this relation to the multivariate case, but this task remains elusive at present. Nevertheless, Theorem 3.3 does contain information about the multivariate extension of Kendall's tau of a d-variate Archimax copula C_{ψ,ℓ}, which is defined by

τψ,ℓ = −1/(2^{d−1} − 1) + {2^d/(2^{d−1} − 1)} Pr(U1 < U1⋆, . . . , Ud < Ud⋆)

in terms of two independent random vectors (U1, . . . , Ud) and (U1⋆, . . . , Ud⋆) with distribution C_{ψ,ℓ}. Indeed, a simple calculation yields

τψ,ℓ = −1/(2^{d−1} − 1) + {2^d/(2^{d−1} − 1)} E[ψ{R ℓ(S1, . . . , Sd)}],

where R and (S1, . . . , Sd) are as in (12). When d = 2 and ℓ = ℓΠ, this formula reduces to the expression given in Proposition 4.7 of McNeil and Nešlehová [19], due to the fact that ℓΠ(S1, S2) = 1 almost surely. Upon writing ψ as the Williamson d-transform of R and proceeding as in the proof of Proposition 2 in [20], one also obtains

τψ,ℓ = −1/(2^{d−1} − 1) + {2^d/(2^{d−1} − 1)} E([max{0, 1 − Y ℓ(S1, . . . , Sd)}]^{d−1}),

where Y is a ratio of R and an independent copy thereof. When ℓ = ℓΠ, this formula becomes the expression for Kendall's tau for multivariate Archimedean copulas of McNeil and Nešlehová [20], again because ℓΠ(S1, . . . , Sd) = 1 almost surely. As a closing teaser, note that in dimension d = 2, Pr{ℓ(S1, S2) = 1} = 1 − τD, where D is the extreme-value copula corresponding to ℓ. Illustrations of this fact were flagged in Sections 5.1 and 5.2. The proof of this result is left to the reader.
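The closing teaser is easy to corroborate by simulation. In the Marshall–Olkin case of Section 5.1 one has τD = αγ, and the following sketch of ours reuses the conditional construction of Section 5.1 to estimate Pr{ℓ(S1, S2) = 1}:

```python
import random

ALPHA, BETA = 0.5, 0.3
GAMMA = BETA / (ALPHA + BETA - ALPHA * BETA)
TAU = ALPHA * GAMMA          # Kendall's tau of the Marshall-Olkin copula

def draw_pair():
    # one draw of (S1, S2) with survival function max(0, 1 - l_{alpha,beta})
    s1 = random.random()
    if s1 >= GAMMA:
        return s1, (1.0 - s1) / (1.0 - BETA)
    if random.random() < ALPHA:
        return s1, ALPHA * s1 / BETA
    return s1, 1.0 - s1 * (1.0 - ALPHA)

def on_unit_level_set(s1, s2, tol=1e-12):
    # does the pair sit on the set {l_{alpha,beta}(s1, s2) = 1}?
    ell = s1 + s2 - min(ALPHA * s1, BETA * s2)
    return abs(ell - 1.0) < tol
```

With α = 0.5 and β = 0.3, the empirical frequency of the level set should be close to 1 − τD ≈ 0.769.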

Acknowledgments Most of this work was carried out while Genest and Nešlehová were visiting the Université Claude-Bernard in Lyon; they are grateful to the members of the Institut Camille Jordan for their hospitality. This research was supported by the LABEX MILYON (ANR–10–LABX–0070) of Université de Lyon, within the program ‘‘Investissements d’avenir’’ (ANR–11–IDEX–0007) operated by the Agence nationale pour la recherche. Additional funding for this work was provided by the Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada, and the Fonds de recherche du Québec–Nature et technologies.


Appendix. Proof of Proposition 5.3

Fix θ > 0 and for arbitrary k ∈ {1, . . . , d} and s1, . . . , sk ∈ (0, 1), let

Ḡk(s1, . . . , sk) = {max(0, 1 − σk^{1/θ})}^{d−1},  σk = s1^θ + · · · + sk^θ.

Then for any k ∈ {1, . . . , d − 1} and s1, . . . , sk+1 ∈ (0, 1), one has

Pr(Sk+1 ≤ sk+1 | S1 = s1, . . . , Sk = sk) = 1 − [∂^k Ḡk+1(s1, . . . , sk+1)/∂s1 · · · ∂sk] / [∂^k Ḡk+1(s1, . . . , sk, 0)/∂s1 · · · ∂sk]

if sk+1^θ < 1 − σk and 1 otherwise. The problem thus reduces to computing the partial derivatives of Ḡk. A formula to this effect is given below.

Lemma A.1. Let k ∈ {1, . . . , d} and j ∈ {1, . . . , k} be such that j < k if k = d. If σk > 1, then ∂^j Ḡk(s1, . . . , sk)/∂s1 · · · ∂sj = 0, and if σk < 1, then

∂^j Ḡk(s1, . . . , sk)/∂s1 · · · ∂sj = (−1)^j (d − 1) s1^{θ−1} · · · sj^{θ−1} ∑_{i=1}^{j} ηθ,i,j (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−j}.

Proof of Lemma A.1. For arbitrary k ∈ {1, . . . , d}, the claim of the lemma can be established by induction on j. When σk < 1 and j = 1,

∂Ḡk(s1, . . . , sk)/∂s1 = ∂(1 − σk^{1/θ})^{d−1}/∂s1 = −(d − 1)(1 − σk^{1/θ})^{d−2} (1/θ) σk^{1/θ−1} ∂σk/∂s1 = −(d − 1) s1^{θ−1} (1 − σk^{1/θ})^{d−2} σk^{1/θ−1},

as it should be, in view of the fact that ηθ,1,1 = 1. Next, suppose the claim holds for some j < min(k, d − 1). To establish that it holds for j + 1, one needs to compute

∂/∂sj+1 [(−1)^j (d − 1) s1^{θ−1} · · · sj^{θ−1} ∑_{i=1}^{j} ηθ,i,j (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−j}].  (17)

Now, for arbitrary i ∈ {1, . . . , j}, one has

∂/∂sj+1 [ηθ,i,j (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−j}] = −sj+1^{θ−1} [ηθ,i,j (d − i − 1)(1 − σk^{1/θ})^{d−i−2} σk^{(i+1)/θ−j−1} + ηθ,i,j (j − i/θ) θ (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−j−1}].

Therefore, Eq. (17) can be rewritten as

(−1)^{j+1} (d − 1) s1^{θ−1} · · · sj^{θ−1} sj+1^{θ−1} [∑_{i=2}^{j+1} ηθ,i−1,j (d − i)(1 − σk^{1/θ})^{d−i−1} σk^{i/θ−j−1} + ∑_{i=1}^{j} ηθ,i,j (j − i/θ) θ (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−j−1}].

Observe that

ηθ,1,j (j − 1/θ) θ = θ^j ∏_{m=1}^{j} (m − 1/θ) = ηθ,1,j+1

and that

ηθ,j,j (d − j − 1) = ∏_{m=2}^{j+1} (d − m) = ηθ,j+1,j+1.

Finally, for i ∈ {2, . . . , j}, it can easily be checked that

ηθ,i−1,j (d − i) + ηθ,i,j (j − i/θ) θ = ηθ,i,j+1.

Thus the argument is complete. □
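The three identities in this proof determine the ηθ,i,j completely from ηθ,1,1 = 1, which gives a convenient way to sanity-check Lemma A.1 numerically. The sketch below is ours, not part of the paper: it compares the stated formula for a mixed partial derivative of Ḡk with a central finite difference.

```python
def eta(theta, d, i, j):
    # eta_{theta,i,j} via the recursions of the proof, with eta_{theta,1,1} = 1
    if i < 1 or i > j:
        return 0.0
    if i == 1 and j == 1:
        return 1.0
    # eta_{theta,i,j} = eta_{theta,i-1,j-1}(d - i) + eta_{theta,i,j-1}{(j - 1)theta - i}
    return eta(theta, d, i - 1, j - 1) * (d - i) + eta(theta, d, i, j - 1) * ((j - 1) * theta - i)

def g_bar(s, theta, d):
    # survival function G_k(s1, ..., sk) = {max(0, 1 - sigma_k^(1/theta))}^(d-1)
    sig = sum(x ** theta for x in s)
    return max(0.0, 1.0 - sig ** (1.0 / theta)) ** (d - 1)

def lemma_formula(s, theta, d, j):
    # right-hand side of Lemma A.1 for the j-th mixed partial at s = (s1, ..., sk)
    sig = sum(x ** theta for x in s)
    total = sum(eta(theta, d, i, j) * (1.0 - sig ** (1.0 / theta)) ** (d - i - 1)
                * sig ** (i / theta - j) for i in range(1, j + 1))
    prod = 1.0
    for m in range(j):
        prod *= s[m] ** (theta - 1.0)
    return (-1.0) ** j * (d - 1) * prod * total
```

For instance, with d = 4, θ = 2.5 and j = 2, the formula matches a finite-difference approximation of ∂²Ḡ3/∂s1∂s2 to within discretization error.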
In view of Lemma A.1, it is clear that, when sk+1^θ < 1 − σk, one has

Pr(Sk+1 ≤ sk+1 | S1 = s1, . . . , Sk = sk) = 1 − [∑_{i=1}^{k} ηθ,i,k (1 − σk+1^{1/θ})^{d−i−1} σk+1^{i/θ−k}] / [∑_{i=1}^{k} ηθ,i,k (1 − σk^{1/θ})^{d−i−1} σk^{i/θ−k}]
= 1 − ∑_{i=1}^{k} πi,k (σk/σk+1)^{k−i/θ} {(1 − σk+1^{1/θ})/(1 − σk^{1/θ})}^{d−i−1},

where σk+1 = σk + sk+1^θ. Taking into account Notation 5.2, one may thus write

Pr(Sk+1 ≤ sk+1 | S1 = s1, . . . , Sk = sk) = 1 − ∑_{i=1}^{k} πi,k H̄θ,σk,k−i/θ(sk+1) Q̄θ,σk,d−i−1(sk+1).

The statement of Proposition 5.3 now follows because π1,k + · · · + πk,k = 1.

References

[1] T. Bacigál, V. Jágr, R. Mesiar, Non-exchangeable random variables, Archimax copulas and their fitting to real data, Kybernetika 47 (2011) 519–531.
[2] T. Bacigál, R. Mesiar, 3-dimensional Archimax copulas and their fitting to real data, in: COMPSTAT 2012, Limassol, Cyprus, 2012.
[3] J. Beirlant, Y. Goegebeur, J. Teugels, J. Segers, Statistics of Extremes, Wiley, Chichester, 2004.
[4] G. Bernhart, M. Escobar Anel, J.-F. Mai, M. Scherer, Default models based on scale mixtures of Marshall–Olkin copulas: properties and applications, Metrika 76 (2013) 179–203.
[5] P. Capéraà, A.-L. Fougères, C. Genest, Bivariate distributions with given extreme value attractor, J. Multivariate Anal. 72 (2000) 30–49.
[6] M. Falk, R.-D. Reiß, On Pickands coordinates in arbitrary dimensions, J. Multivariate Anal. 92 (2005) 426–453.
[7] A.-L. Fougères, C. Mercadier, J.P. Nolan, Dense classes of multivariate extreme value distributions, J. Multivariate Anal. 116 (2013) 109–129.
[8] C. Genest, L.-P. Rivest, A characterization of Gumbel's family of extreme value distributions, Statist. Probab. Lett. 8 (1989) 207–211.
[9] P. Georges, A.-G. Lamy, E. Nicolas, G. Quibel, T. Roncalli, Multivariate survival modelling: a unified approach with copulas, 2001. http://dx.doi.org/10.2139/ssrn.1032559.
[10] M. Hofert, Efficiently sampling nested Archimedean copulas, Comput. Statist. Data Anal. 55 (2011) 57–70.
[11] D. Hofmann, Characterization of the D-norm corresponding to a multivariate extreme value distribution, Ph.D. Thesis, Bayerische Julius–Maximilians–Universität Würzburg, Germany, 2009.
[12] X. Huang, Statistics of bivariate extreme values, Ph.D. Thesis, Tinbergen Institute Research Series, The Netherlands, 1992.
[13] H. Joe, T. Hu, Multivariate distributions from mixtures of max-infinitely divisible distributions, J. Multivariate Anal. 57 (1996) 240–265.
[14] M. Larsson, J. Nešlehová, Extremal behavior of Archimedean copulas, Adv. Appl. Probab. 43 (2011) 195–216.
[15] H. Li, Orthant tail dependence of multivariate extreme value distributions, J. Multivariate Anal. 100 (2009) 243–256.
[16] J.-F. Mai, M. Scherer, A tractable multivariate default model based on a stochastic time-change, Int. J. Theor. Appl. Finance 12 (2009) 227–249.
[17] A.W. Marshall, I. Olkin, Families of multivariate distributions, J. Amer. Statist. Assoc. 83 (1988) 834–841.
[18] A.J. McNeil, R. Frey, P. Embrechts, Quantitative Risk Management: Concepts, Techniques and Tools, Princeton University Press, Princeton, NJ, 2005.
[19] A.J. McNeil, J. Nešlehová, Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions, Ann. Statist. 37 (2009) 3059–3097.
[20] A.J. McNeil, J. Nešlehová, From Archimedean to Liouville copulas, J. Multivariate Anal. 101 (2010) 1772–1790.
[21] R. Mesiar, V. Jágr, d-dimensional dependence functions and Archimax copulas, Fuzzy Sets and Systems 228 (2013) 78–87.
[22] P.M. Morillas, A characterization of absolutely monotonic (∆) functions of a fixed order, Publ. Inst. Math. (Beograd) (N.S.) 78 (92) (2005) 93–105.
[23] S. Nadarajah, Simulation of multivariate extreme values, J. Stat. Comput. Simul. 62 (1999) 395–410.
[24] R.B. Nelsen, An Introduction to Copulas, second ed., Springer, New York, 2006.
[25] S.I. Resnick, Extreme Values, Regular Variation and Point Processes, Springer, New York, 2008.
[26] P. Ressel, Homogeneous distributions—and a spectral representation of classical mean values and stable tail dependence functions, J. Multivariate Anal. 117 (2013) 246–256.
[27] J. Segers, Max-stable models for multivariate extremes, REVSTAT—Stat. J. 10 (2012) 61–82.
[28] A.G. Stephenson, Simulating multivariate extreme value distributions of logistic type, Extremes 6 (2003) 49–59.
[29] D.V. Widder, The Laplace Transform, in: Princeton Mathematical Series, vol. 6, Princeton University Press, Princeton, NJ, 1941.
