Binomial Matrices


Advances in Computational Mathematics 14: 379-391, 2001. © 2001 Kluwer Academic Publishers. Printed in the Netherlands.

Binomial matrices

Geoff Boyd (a), Charles A. Micchelli (b,*), Gilbert Strang (c) and Ding-Xuan Zhou (d,**)

(a) New Transducers Limited, 37 Ixworth Place, London SW3 3QH, UK. E-mail: [email protected]
(b) Department of Mathematics and Statistics, State University of New York at Albany, Albany, NY 12222, USA. E-mail: [email protected], [email protected]
(c) Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. E-mail: [email protected]
(d) Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong. E-mail: [email protected]

Received 15 October 2000; accepted 4 May 2001
Communicated by Y. Xu

Every $s \times s$ matrix $A$ yields a composition map acting on polynomials on $\mathbb{R}^s$. Specifically, for every polynomial $p$ we define the mapping $C_A$ by the formula $(C_A p)(x) := p(Ax)$, $x \in \mathbb{R}^s$. For each nonnegative integer $n$, homogeneous polynomials of degree $n$ form an invariant subspace for $C_A$. We let $A^{(n)}$ be the matrix representation of $C_A$ relative to the monomial basis and call $A^{(n)}$ a binomial matrix. This paper studies the asymptotic behavior of $A^{(n)}$ as $n \to \infty$. The special case of $2 \times 2$ matrices $A$ with the property that $A^2 = I$ corresponds to discrete Taylor series and motivated our original interest in binomial matrices.

Keywords: binomial matrix, homogeneous polynomial, permanents, Bernstein polynomials, Krawtchouk polynomials, de Casteljau subdivision

AMS subject classification: 15A, 41A

1. Introduction

This paper deals with the following matrix theoretic construction. We begin with an $s \times s$ matrix $A$. Let $\Pi(\mathbb{R}^s)$ be the space of all polynomials on $\mathbb{R}^s$ and consider the composition map $C_A$ defined by the equation
$$(C_A p)(x) := p(Ax), \qquad x \in \mathbb{R}^s,\ p \in \Pi(\mathbb{R}^s).$$

* Supported in part by the US National Science Foundation under grants DMS-9504780 and DMS-9973427, and by the Wavelets Strategic Research Programme, National University of Singapore, under a grant from the National Science and Technology Board and the Ministry of Education, Singapore.
** Corresponding author. Supported by the Research Grants Council of Hong Kong under grant #9040463.


For each $n \in \mathbb{Z}_+$, the space $H_n(\mathbb{R}^s)$ of homogeneous polynomials of degree $n$ on $\mathbb{R}^s$ is an invariant subspace of $C_A$. We let $A^{(n)}$ be the matrix representation of $C_A$ relative to the monomial basis for $H_n(\mathbb{R}^s)$. To identify the elements of $A^{(n)}$, we define $x = (x_1, \dots, x_s)$, $\alpha = (\alpha_1, \dots, \alpha_s) \in \mathbb{Z}_+^s$, $|\alpha| := \alpha_1 + \cdots + \alpha_s$, $\Lambda_n := \{\alpha : |\alpha| = n\}$ and
$$m_\alpha(x) := x^\alpha := x_1^{\alpha_1} \cdots x_s^{\alpha_s}, \qquad \alpha \in \Lambda_n,\ x \in \mathbb{R}^s.$$
Therefore, the $d \times d$ matrix $A^{(n)} = (A^{(n)}_{\alpha\beta} : \alpha, \beta \in \Lambda_n)$ has the property that
$$m_\alpha(Ax) = \sum_{\beta \in \Lambda_n} A^{(n)}_{\alpha\beta}\, x^\beta, \qquad x \in \mathbb{R}^s,$$
where
$$d = \dim H_n(\mathbb{R}^s) = \binom{n+s-1}{s-1}.$$

The multivariate binomial theorem allows for the identification of $A^{(n)}$. For this reason, we refer to the matrix $A^{(n)}$ as a binomial matrix. Our interest in binomial matrices started with the special case $s = 2$ and $A = B$, where
$$B := \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$
The matrix $B^{(n)}$ appears in the work of Haddad [6] and comes from rescaling the rows and columns of Hermite matrices, which were the object of study in [1,6,7]. The columns of $B^{(n)}$ have, to us, the appearance of a discrete Taylor series and therefore led us to study their asymptotic behavior as $n \to \infty$. It is precisely this issue that concerns us here, namely the asymptotic behavior of the binomial matrices $A^{(n)}$ for an arbitrary $s \times s$ matrix $A$. The matrix $B^{(n)}$ is related to Krawtchouk polynomials [9,12], which are important in coding theory [9], and their asymptotic behavior has recently been investigated in [8] from a different perspective.

Binomial matrices also appeared in the study of $N$-widths of Hilbert spaces of holomorphic functions on unitarily invariant domains in $\mathbb{C}^s$. In a Hilbert space the eigenvalues of a compact integral operator identify the $N$-widths, and the unitary invariance of the domain reduces the operator to the composition map $C_A$ for an appropriate matrix $A$ [10]. In the analysis presented in [10], it sufficed to know that when $A$ is diagonalizable with eigenvalues $\lambda_1, \dots, \lambda_s$, the eigenvalues of $A^{(n)}$ are the $m_\alpha(\lambda)$, $\alpha \in \Lambda_n$, where $\lambda := (\lambda_1, \dots, \lambda_s)$.

Binomial matrices have the following important property. For any $s \times s$ matrices $M$ and $N$ we have that
$$(MN)^{(n)} = M^{(n)} N^{(n)}.$$
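The multiplicative property just stated is easy to check numerically. The sketch below is our own illustration, not from the paper (all helper names are ours): it builds $A^{(n)}$ directly from the defining relation $m_\alpha(Ax) = \sum_\beta A^{(n)}_{\alpha\beta} x^\beta$ using exact rational arithmetic and verifies $(MN)^{(n)} = M^{(n)} N^{(n)}$ for two $2 \times 2$ integer matrices.

```python
# Illustrative sketch: build the binomial matrix A^(n) from its defining
# relation and check the homomorphism (MN)^(n) = M^(n) N^(n).
from fractions import Fraction
from collections import defaultdict

def multi_indices(n, s):
    # All alpha in Z_+^s with |alpha| = n, in a fixed order.
    if s == 1:
        return [(n,)]
    return [(v,) + rest for v in range(n, -1, -1)
            for rest in multi_indices(n - v, s - 1)]

def binom_matrix(A, n):
    # Row alpha of A^(n): coefficients of m_alpha(Ax) = prod_j (a^j . x)^alpha_j.
    s = len(A)
    idx = multi_indices(n, s)
    rows = []
    for alpha in idx:
        poly = defaultdict(Fraction)
        poly[(0,) * s] = Fraction(1)
        for j in range(s):              # multiply by (a^j . x), alpha_j times
            for _ in range(alpha[j]):
                nxt = defaultdict(Fraction)
                for beta, c in poly.items():
                    for i in range(s):
                        b = list(beta); b[i] += 1
                        nxt[tuple(b)] += c * A[j][i]
                poly = nxt
        rows.append([poly[beta] for beta in idx])
    return rows

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

M = [[1, 2], [3, 4]]
N = [[0, 1], [1, 1]]
assert binom_matrix(matmul(M, N), 3) == matmul(binom_matrix(M, 3), binom_matrix(N, 3))
```

For $s = 2$ and $n = 3$ the matrices are $4 \times 4$, in accordance with $d = \binom{n+s-1}{s-1}$.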


Binomial matrices have also made their presence known in subdivision. A modification of the de Casteljau subdivision for Bernstein-Bezier curves leads to the binomial matrices corresponding to the two $2 \times 2$ matrices
$$\begin{pmatrix} 1 & 0 \\ 1-x & x \end{pmatrix}, \qquad \begin{pmatrix} 1-x & x \\ 0 & 1 \end{pmatrix},$$
where $x \in [0,1]$. This is described in detail in [11, (1.71)]. Binomial matrices play a central role in the study of the algebraic properties of B-patches as well as the analysis of B-patch subdivision [4]. For these reasons they are prominent in [11], where their relationship to "blossoming" is explained. In particular, one can obtain an explicit representation for the elements of $A^{(n)}$ as the permanent of a certain matrix. Specifically, we form the $n \times n$ matrix $M$ by repeating the $i$th row of $A$ $\alpha_i$ times and the $j$th column $\beta_j$ times, $i, j = 1, 2, \dots, s$, to obtain the formula
$$A^{(n)}_{\alpha\beta} = \frac{1}{\beta!}\,\mathrm{per}\, M$$
[11, p. 28]. Alternative formulas for the elements of $A^{(n)}$ will be provided here. Recently, binomial matrices have arisen in questions related to multidimensional refinement equations [2,3].
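The permanent representation can be sanity-checked on a small case. In this hedged sketch (our own code, $s = 2$; function names are ours), the coefficient of $x^\beta$ in $m_\alpha(Ax)$ is computed by direct binomial expansion and compared with $\mathrm{per}(M)/\beta!$ for the row/column-repeated matrix $M$ described above.

```python
# Check A^(n)_{alpha,beta} = per(M)/beta! for a 2x2 example: M repeats row i
# of A alpha_i times and column j beta_j times.
from math import comb, factorial, prod
from itertools import permutations

def coeff(A, alpha, beta):
    # Coefficient of x1^b1 x2^b2 in (a x1 + b x2)^a1 (c x1 + d x2)^a2.
    (a, b), (c, d) = A
    a1, a2 = alpha
    b1 = beta[0]
    return sum(comb(a1, i) * a**i * b**(a1 - i)
               * comb(a2, b1 - i) * c**(b1 - i) * d**(a2 - b1 + i)
               for i in range(a1 + 1) if 0 <= b1 - i <= a2)

def permanent(M):
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def repeated_matrix(A, alpha, beta):
    rows = [i for i, m in enumerate(alpha) for _ in range(m)]
    cols = [j for j, m in enumerate(beta) for _ in range(m)]
    return [[A[i][j] for j in cols] for i in rows]

A = [[1, 2], [3, -1]]
n = 3
for a1 in range(n + 1):
    for b1 in range(n + 1):
        alpha, beta = (a1, n - a1), (b1, n - b1)
        bfact = factorial(beta[0]) * factorial(beta[1])
        assert permanent(repeated_matrix(A, alpha, beta)) == coeff(A, alpha, beta) * bfact
```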

2. A motivating example

To motivate our line of investigation we begin with the $2 \times 2$ matrix
$$B = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \tag{2.1}$$
The $j$th row of the corresponding binomial matrix $B^{(n)} = (B^{(n)}_{jk})$, $j, k = 0, 1, \dots, n$, is defined by
$$\sum_{k=0}^{n} B^{(n)}_{jk}\, x_1^{n-k} x_2^{k} = (x_1 + x_2)^{n-j} (x_1 - x_2)^{j}, \qquad (x_1, x_2) \in \mathbb{R}^2.$$
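A quick numerical check of this defining relation (our own sketch): compute $B^{(n)}_{jk}$ as the coefficient of $x^k$ in $(1-x)^j (1+x)^{n-j}$, i.e. the explicit sum $\sum_i (-1)^i \binom{j}{i}\binom{n-j}{k-i}$, and evaluate both sides at exact rational points.

```python
# Sketch (ours): verify the row-generating identity
#   sum_k B^(n)_{jk} x1^(n-k) x2^k = (x1 + x2)^(n-j) (x1 - x2)^j
# using exact rational arithmetic.
from fractions import Fraction
from math import comb

def b_entry(j, k, n):
    # Coefficient of x^k in (1 - x)^j (1 + x)^(n - j).
    return sum((-1)**i * comb(j, i) * comb(n - j, k - i)
               for i in range(min(j, k) + 1))

n = 5
x1, x2 = Fraction(2, 3), Fraction(1, 5)
for j in range(n + 1):
    lhs = sum(b_entry(j, k, n) * x1**(n - k) * x2**k for k in range(n + 1))
    rhs = (x1 + x2)**(n - j) * (x1 - x2)**j
    assert lhs == rhs
```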

Alternatively, $B^{(n)}_{jk} = P_k(j; n)$, where $P_k(x; n)$ is the Krawtchouk polynomial; see [12, p. 36; 9, p. 151]. We start with the observation that $B = LDU$, where
$$L := \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad D := \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}, \qquad U := \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. \tag{2.2}$$


From (2.2) we obtain that $B^{(n)} = L^{(n)} D^{(n)} U^{(n)}$, where $L^{(n)}$ is the $(n+1) \times (n+1)$ lower triangular Pascal matrix defined by
$$L^{(n)}_{jk} = \binom{j}{k}, \qquad j, k = 0, 1, \dots, n,$$
$D^{(n)}$ is the diagonal matrix $\mathrm{diag}\{1, -2, \dots, (-2)^n\}$, and $U^{(n)}$ is the upper triangular matrix obtained by permuting the rows and columns of $L^{(n)}$, that is,
$$U^{(n)}_{jk} = \binom{n-j}{n-k}, \qquad j, k = 0, 1, \dots, n.$$

Theorem 2.1. For every $n \in \mathbb{Z}_+$,
$$B^{(n)} = L^{(n)} D^{(n)} U^{(n)}. \tag{2.3}$$
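Theorem 2.1 can be verified numerically for a fixed $n$; the sketch below (ours) multiplies out $L^{(6)} D^{(6)} U^{(6)}$ and compares it entrywise with the coefficients of $(1-x)^j (1+x)^{n-j}$.

```python
# Sketch (ours): verify B^(n) = L^(n) D^(n) U^(n) for n = 6.
from math import comb

n = 6
L = [[comb(j, k) for k in range(n + 1)] for j in range(n + 1)]
D = [[(-2)**k if j == k else 0 for k in range(n + 1)] for j in range(n + 1)]
U = [[comb(n - j, n - k) for k in range(n + 1)] for j in range(n + 1)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def b_entry(j, k):
    # Coefficient of x^k in (1 - x)^j (1 + x)^(n - j).
    return sum((-1)**i * comb(j, i) * comb(n - j, k - i) for i in range(min(j, k) + 1))

LDU = matmul(matmul(L, D), U)
assert all(LDU[j][k] == b_entry(j, k) for j in range(n + 1) for k in range(n + 1))
```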

Proof. This result follows immediately from (2.2) and the previously mentioned multiplicative property of binomial matrices. Alternatively, we can argue in the following manner. For $j, k = 0, 1, \dots, n$, we have that
$$\left( L^{(n)} D^{(n)} U^{(n)} \right)_{jk} = \sum_{l=0}^{n} \binom{j}{l} (-2)^l \binom{n-l}{n-k}$$
and thus for any $x \in \mathbb{R}$
$$\sum_{k=0}^{n} \left( L^{(n)} D^{(n)} U^{(n)} \right)_{jk} x^k = \sum_{k=0}^{n} \sum_{l=0}^{n} \binom{j}{l} (-2)^l \binom{n-l}{n-k} x^k.$$
Interchanging the order of summation over $k$ and $l$ establishes that
$$\sum_{k=0}^{n} \left( L^{(n)} D^{(n)} U^{(n)} \right)_{jk} x^k = \sum_{l=0}^{j} \binom{j}{l} (-2)^l x^l \sum_{k=l}^{n} \binom{n-l}{n-k} x^{k-l} = \sum_{l=0}^{j} \binom{j}{l} (-2x)^l (1+x)^{n-l} = (1-x)^j (1+x)^{n-j},$$
thereby confirming theorem 2.1. □

We observe that $B^2 = 2I$ and consequently $(B^{(n)})^2 = 2^n I$. Likewise, since the eigenvalues of $B$ are $\pm\sqrt{2}$, we conclude that the eigenvalues of $B^{(n)}$ are $(\sqrt{2})^k (-\sqrt{2})^{n-k}$, $k = 0, 1, \dots, n$. The asymptotic behavior of the binomial matrix $B^{(n)}$ as $n \to \infty$ is more challenging. Let us begin this analysis with some useful formulas. For $j, k = 0, 1, \dots, n$, we see that
$$B^{(n)}_{jk} = \frac{1}{k!} P_{jn}^{(k)}(0) = \frac{(-2n)^k}{k!} Q_{kn}\!\left( \frac{j}{n} \right), \tag{2.4}$$
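The identity $(B^{(n)})^2 = 2^n I$ is easy to confirm directly; the following is our own sketch for $n = 5$.

```python
# Sketch (ours): check (B^(n))^2 = 2^n I, as follows from B^2 = 2I and the
# homomorphism property of binomial matrices, for n = 5.
from math import comb

n = 5
B = [[sum((-1)**i * comb(j, i) * comb(n - j, k - i) for i in range(min(j, k) + 1))
      for k in range(n + 1)] for j in range(n + 1)]
sq = [[sum(B[j][m] * B[m][k] for m in range(n + 1)) for k in range(n + 1)]
      for j in range(n + 1)]
assert sq == [[2**n if j == k else 0 for k in range(n + 1)] for j in range(n + 1)]
```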


where $P_{jn}$ and $Q_{kn}$ are the polynomials defined by the equations
$$P_{jn}(x) = (1-x)^j (1+x)^{n-j}, \qquad x \in \mathbb{R},$$
and
$$Q_{kn}(x) = (-2)^{-k} \sum_{i=0}^{k} (-1)^i \binom{k}{i} \prod_{p=0}^{i-1} \left( x - \frac{p}{n} \right) \prod_{q=0}^{k-i-1} \left( 1 - x - \frac{q}{n} \right), \qquad x \in \mathbb{R}. \tag{2.5}$$
It follows from equation (2.4) that
$$B^{(n)}_{kj} = \sum_{\max\{0,\, j+k-n\} \leq i \leq \min\{j,\, k\}} (-1)^i \binom{k}{i} \binom{n-k}{j-i}. \tag{2.6}$$
For a fixed $k \in \mathbb{Z}_+$, we derive from equation (2.5) that
$$\lim_{n \to \infty} Q_{kn}(x) = \left( x - \frac{1}{2} \right)^k, \qquad x \in \mathbb{R}, \tag{2.7}$$
uniformly on every bounded subinterval of $\mathbb{R}$. Therefore, we conclude for any $k \in \mathbb{Z}_+$ that
$$\lim_{n \to \infty} \max_{0 \leq j \leq n} \left| \frac{k!}{(-2n)^k}\, B^{(n)}_{jk} - \left( \frac{j}{n} - \frac{1}{2} \right)^k \right| = 0. \tag{2.8}$$
Alternatively, we may express the asymptotic behavior of $B^{(n)}$ in terms of the linear functionals
$$T_{kn} f = \sum_{j=0}^{n} B^{(n)}_{kj}\, f\!\left( \frac{j}{n} \right), \tag{2.9}$$
where $f \in C[0,1]$ and $k = 0, 1, \dots, n$. In the next theorem we use the modulus of continuity $\omega(f, h)$ of $f$ defined by
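The limit (2.8) can be observed numerically. In the sketch below (our code, $k = 2$), the maximum deviation over $j$ of the scaled entries $k!/(-2n)^k\, B^{(n)}_{jk}$ from $(j/n - 1/2)^k$ decreases as $n$ grows.

```python
# Sketch (ours): numerical check of (2.8) for k = 2, in exact arithmetic.
from fractions import Fraction
from math import comb, factorial

def b_entry(j, k, n):
    # Coefficient of x^k in (1 - x)^j (1 + x)^(n - j).
    return sum((-1)**i * comb(j, i) * comb(n - j, k - i) for i in range(min(j, k) + 1))

def max_err(n, k=2):
    scale = Fraction(factorial(k), (-2 * n)**k)
    return max(abs(scale * b_entry(j, k, n) - (Fraction(j, n) - Fraction(1, 2))**k)
               for j in range(n + 1))

assert max_err(200) < max_err(50) < Fraction(1, 100)
```

For $k = 2$ the deviation works out to exactly $1/(4n)$, uniformly in $j$, consistent with an $O(1/n)$ rate.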

$$\omega(f, h) := \sup\left\{ \left| f(x+t) - f(x) \right| : |t| \leq h,\ x, x+t \in [0,1] \right\}.$$

Theorem 2.2. If $k \in \mathbb{Z}_+$, $f \in C^k[0,1]$ and $n \geq k$, then
$$\left| (-1)^k n^k 2^{k-n}\, T_{kn}(f) - f^{(k)}(1/2) \right| \leq \frac{3}{2}\, \omega\!\left( f^{(k)}, 1/\sqrt{n} \right).$$

Proof. The first step in the proof is to introduce the positive linear functional
$$L_{kn} f = \frac{n^k}{2^{n-k}} \sum_{j=0}^{n-k} \binom{n-k}{j} \int_0^{1/n} \cdots \int_0^{1/n} f\!\left( \frac{j}{n} + t_1 + \cdots + t_k \right) dt_1 \dots dt_k.$$
We claim for $f \in C^k[0,1]$ that
$$(-1)^k n^k 2^{k-n}\, T_{kn} f = L_{kn} f^{(k)}. \tag{2.10}$$


We give two proofs of this equation. The first follows by substituting equation (2.6) directly into the definition of $T_{kn}$. Specifically, we have that
$$T_{kn} f = \sum_{i=0}^{k} (-1)^i \binom{k}{i} \sum_{j=i}^{i+n-k} \binom{n-k}{j-i} f\!\left( \frac{j}{n} \right) = \sum_{j=0}^{n-k} \binom{n-k}{j} \sum_{i=0}^{k} (-1)^i \binom{k}{i} f\!\left( \frac{j}{n} + \frac{i}{n} \right),$$
which can be rewritten as the multidimensional integral appearing in (2.10).

The second proof of equation (2.10) proceeds differently. For every real $\rho$, we define the function $f_\rho$ by the equation $f_\rho(x) = e^{\rho x}$, $x \in \mathbb{R}$. The linear combinations of these functions are dense in $C^k[0,1]$ as $\rho$ varies over $\mathbb{R}$. Thus it suffices to verify (2.10) for $f = f_\rho$, and this can be done by direct computation from the formula
$$L_{kn} f_\rho = \frac{n^k}{2^{n-k}} \left( 1 + e^{\rho/n} \right)^{n-k} \left( \frac{e^{\rho/n} - 1}{\rho} \right)^k. \tag{2.11}$$
To prove the theorem, we consider the monomials defined by the equations $m_0(x) = 1$, $m_1(x) = x$ and $m_2(x) = x^2$ for $x \in \mathbb{R}$. Differentiating both sides of equation (2.11) and evaluating the resulting expressions at $\rho = 0$ gives the equations
$$L_{kn} m_0 = 1, \qquad L_{kn} m_1 = m_1(1/2) \tag{2.12}$$
and
$$L_{kn} m_2 = m_2\!\left( \frac{1}{2} \right) + \frac{3n - 2k}{12 n^2}. \tag{2.13}$$
For any nonnegative linear functional $L$ on $C[0,1]$ such that $L m_0 = 1$ and $L m_1 = 1/2$, any $\delta > 0$ and any $f \in C[0,1]$ there holds the inequality
$$\left| Lf - f\!\left( \frac{1}{2} \right) \right| \leq \left( 1 + \frac{\sqrt{\rho}}{\delta} \right) \omega(f; \delta), \tag{2.14}$$
where $\rho := L m_2 - m_2(1/2)$. We apply this fact to $L_{kn}$, with $\delta = 1/\sqrt{n}$, and use (2.13) to prove the theorem. □

3. Asymptotic behavior of binomial matrices

In this section we study the asymptotic behavior of the matrices $A^{(n)}$ as $n \to \infty$ for an $s \times s$ matrix $A$. Specifically, we consider the sequence of linear functionals
$$T_{\alpha n} f := \sum_{\beta \in \Lambda_n} A^{(n)}_{\alpha\beta}\, f\!\left( \frac{\beta}{n} \right)$$


for any $f \in C(\Delta_s)$, where $\Delta_s$ is the simplex
$$\Delta_s = \left\{ x : x \cdot e = 1,\ x \in \mathbb{R}_+^s \right\} \quad \text{and} \quad e = (1, \dots, 1) \in \mathbb{R}^s.$$

Definition 3.1. Let $x = (x_1, \dots, x_s) \in \mathbb{R}^s$, $s > 1$ and $k \in \mathbb{Z}_+$. The moments of $x$ are defined by
$$I_{k\alpha}(x) := \sum_{\beta \in \Lambda_k} \binom{k}{\beta} x^\beta \beta^\alpha, \qquad \alpha \in \mathbb{Z}_+^s,$$
where for $\beta = (\beta_1, \dots, \beta_s) \in \Lambda_k$, the multinomial coefficients with $s > 1$ are given by
$$\binom{k}{\beta} := \frac{k!}{\beta!}, \qquad \beta! := \beta_1! \beta_2! \cdots \beta_s!.$$
Note by the multinomial theorem that $I_{k0}(x) = (x \cdot e)^k$, which is nonzero for $x \cdot e \neq 0$. On the contrary, when $x \cdot e = 0$ we have that $I_{k\alpha}(x) = 0$ whenever $|\alpha| < k$ and $I_{k\alpha}(x) = k!\, x^\alpha$ for $|\alpha| = k$. To see this we compute for $y \to 0$
$$\sum_{\alpha \in \mathbb{Z}_+^s} \frac{I_{k\alpha}(x)}{\alpha!}\, y^\alpha = \left( x \cdot e^y \right)^k = \left( x \cdot y + o(y) \right)^k \quad \text{when } x \cdot e = 0,$$
where $e^y := (e^{y_1}, \dots, e^{y_s})$.
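The two facts just noted about the moments can be checked directly. The following sketch is our own code (with the convention $0^0 = 1$), for $s = 3$.

```python
# Sketch (ours): the moments of definition 3.1.  For x with x.e = 0,
# I_kalpha(x) vanishes for |alpha| < k and equals k! x^alpha for |alpha| = k;
# for general x, I_k0(x) = (x.e)^k.
from fractions import Fraction
from math import factorial, prod

def multi_indices(n, s):
    if s == 1:
        return [(n,)]
    return [(v,) + rest for v in range(n, -1, -1)
            for rest in multi_indices(n - v, s - 1)]

def moment(x, k, alpha):
    s = len(x)
    total = Fraction(0)
    for beta in multi_indices(k, s):
        mult = Fraction(factorial(k), prod(factorial(b) for b in beta))
        xb = prod(Fraction(x[i])**beta[i] for i in range(s))
        ba = prod(Fraction(beta[i])**alpha[i] for i in range(s))   # 0^0 == 1
        total += mult * xb * ba
    return total

x = (1, 1, -2)                                   # x . e = 0
assert moment(x, 3, (0, 0, 0)) == 0              # (x.e)^3 = 0
assert moment(x, 3, (1, 1, 0)) == 0              # |alpha| < k
assert moment(x, 3, (1, 1, 1)) == factorial(3) * (1 * 1 * -2)
assert moment((1, 2, 3), 2, (0, 0, 0)) == 6**2   # I_k0 = (x.e)^k
```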

Recall the standard notation $D$ for the vector of first partial derivatives, and the notation $D^\alpha = \prod_{i=1}^{s} (\partial/\partial x_i)^{\alpha_i}$ for the $\alpha$-derivative of a function of $x_1, \dots, x_s$. Also, $D_y = y \cdot D$ is the derivative in the "direction" $y$, and for any finite subset $K$ of $\mathbb{R}^s$ we let
$$D_K := \prod_{y \in K} D_y$$
(empty products are set equal to $I$). We denote the row vectors of $A$ by $a^1, a^2, \dots, a^s$. In what follows it will be important to identify the indices in $G := \{j : 1 < j \leq s\}$ for which $a^j \cdot e = 0$, and for this reason we consider the set
$$J := \left\{ j : a^j \cdot e = 0,\ 1 < j \leq s \right\}.$$
We use the following notational convention. When $s > 1$ and $\gamma = (\gamma_2, \dots, \gamma_s) \in \mathbb{Z}_+^{s-1}$ with $|\gamma| \leq n$, we define $\alpha \in \Lambda_n$ uniquely by setting $\alpha_i = \gamma_i$, $1 < i \leq s$ (so that $\alpha_1 = n - |\gamma|$), denote $T_{\alpha n} f$ by $T_{\gamma n} f$, and set $|\gamma|_A := \sum_{j \in J} \gamma_j$ (empty sums are set equal to zero). From the vector $\gamma$ and the matrix $A$ we form the set $H^\gamma$ consisting of the vectors $a^\ell$, $\ell \in J$, each repeated $\gamma_\ell$ times, and the constant
$$\mu_\gamma := \prod_{\ell \in G \setminus J} \left( a^\ell \cdot e \right)^{\gamma_\ell}.$$


Theorem 3.2. If $A$ is an $s \times s$ matrix, $s > 1$, such that $a^1 \cdot e \neq 0$, $\gamma \in \mathbb{Z}_+^{s-1}$ and $f \in C^{|\gamma|}(\Delta_s)$, then
$$\lim_{n \to \infty} \frac{n^{|\gamma|_A}}{(a^1 \cdot e)^{n - |\gamma|}}\, T_{\gamma n} f = \mu_\gamma \left( D_{H^\gamma} f \right)\!\left( \frac{a^1}{a^1 \cdot e} \right).$$

For the proof of this result we use the definition of $A^{(n)}$ to conclude that
$$A^{(n)}_{\alpha\beta} = \frac{1}{\beta!}\, D^\beta \left( C_A(m_\alpha) \right)(0).$$
To evaluate this derivative we use the following fact.

Lemma 3.3 [5]. Let $p$ be a polynomial and $f \in C^\infty(\mathbb{R}^s)$. For vectors $x, x^1, \dots, x^s \in \mathbb{R}^s$ we have that
$$p(D)\, f\!\left( x^1 \cdot x, \dots, x^s \cdot x \right) = \sum_{\alpha \in \mathbb{Z}_+^s} \frac{(D_{K^\alpha} p)(0)}{\alpha!} \left( D^\alpha f \right)\!\left( x^1 \cdot x, \dots, x^s \cdot x \right), \tag{3.1}$$
where $K^\alpha$ is the set consisting of $x^1$ repeated $\alpha_1$ times, ..., $x^s$ repeated $\alpha_s$ times, $\alpha = (\alpha_1, \dots, \alpha_s)$.

Proof of theorem 3.2. We always choose $n$ such that $|\gamma| < n$ and apply lemma 3.3 to the functions $f = m_\alpha$ and $p = m_\beta$ to obtain that
$$A^{(n)}_{\alpha\beta} = \frac{1}{\beta!}\, D^\beta \left( C_A(m_\alpha) \right)(0) = \frac{1}{\beta!}\, D_{J^\alpha}(m_\beta)(0),$$
where $J^\alpha$ is the set consisting of $a^1$ repeated $\alpha_1$ times, ..., $a^s$ repeated $\alpha_s$ times. Since
$$D_{J^\alpha} = \left( a^1 \cdot D \right)^{\alpha_1} \cdots \left( a^s \cdot D \right)^{\alpha_s} = \sum_{|\sigma^1| = \alpha_1} \cdots \sum_{|\sigma^s| = \alpha_s} \prod_{j=1}^{s} \binom{\alpha_j}{\sigma^j} \left( a^j \right)^{\sigma^j} D^{\sigma^1 + \cdots + \sigma^s},$$
we have that
$$A^{(n)}_{\alpha\beta} = \sum_{|\sigma^2| = \alpha_2} \cdots \sum_{|\sigma^s| = \alpha_s} \binom{\alpha_1}{\beta - \sum_{j=2}^{s} \sigma^j} \prod_{j=2}^{s} \binom{\alpha_j}{\sigma^j} \left( a^j \right)^{\sigma^j} \left( a^1 \right)^{\beta - \sum_{j=2}^{s} \sigma^j}$$


and so
$$T_{\gamma n} f = \sum_{|\sigma^1| = n - |\gamma|} \binom{n - |\gamma|}{\sigma^1} \left( a^1 \right)^{\sigma^1} \sum_{|\sigma^2| = \gamma_2} \cdots \sum_{|\sigma^s| = \gamma_s} \prod_{j=2}^{s} \binom{\gamma_j}{\sigma^j} \left( a^j \right)^{\sigma^j} f\!\left( \frac{\sum_{j=1}^{s} \sigma^j}{n} \right).$$

Our hypothesis on $f$ ensures that
$$f\!\left( \frac{\sum_{j=1}^{s} \sigma^j}{n} \right) = \sum_{p=0}^{|\gamma|} \frac{1}{p!} \left( \frac{\sum_{j=2}^{s} \sigma^j}{n} \cdot D \right)^{p} f\!\left( \frac{\sigma^1}{n} \right) + o\!\left( n^{-|\gamma|} \right), \qquad n \to \infty, \tag{3.2}$$
uniformly for $\alpha \in \Lambda_n$, $\sigma^j \in \Lambda_{\alpha_j}$, $j = 1, \dots, s$. Let us first consider the case that $J = \emptyset$. In this case we conclude that
$$\left( a^1 \cdot e \right)^{|\gamma| - n} T_{\gamma n} f = \mu_\gamma \left( B_{n - |\gamma|} f \right)\!\left( \frac{a^1}{a^1 \cdot e} \right) + o(1), \qquad n \to \infty,$$
uniformly on the simplex $\Delta_s$, where

$$B_n f := \sum_{\beta \in \Lambda_n} \binom{n}{\beta} m_\beta\, f\!\left( \frac{\beta}{n} \right)$$
is the $n$th degree Bernstein polynomial of $f$ on $\Delta_s$. We know for any $f \in C(\Delta_s)$ that
$$B_n f = f + o(1), \qquad n \to \infty,$$
uniformly on $\Delta_s$. This fact allows us to conclude that the result is valid when $J = \emptyset$. For $J \neq \emptyset$, we have to deal with the fact that
$$\prod_{j \in G} \left( a^j \cdot e \right)^{\gamma_j} = 0.$$
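The Bernstein convergence fact invoked here is classical; the sketch below (ours) illustrates it in the simplest case $s = 2$, where $\Delta_2$ is parametrized by $(1-x, x)$, $x \in [0,1]$, with the test function $f(x_1, x_2) = x_2^2$.

```python
# Sketch (ours): uniform convergence B_n f -> f, shown on Delta_2
# parametrized by x in [0,1], for f = t^2, in exact arithmetic.
from fractions import Fraction
from math import comb

def bernstein(f, n, x):
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) * f(Fraction(k, n))
               for k in range(n + 1))

f = lambda t: t * t

def max_err(n, grid=20):
    return max(abs(bernstein(f, n, Fraction(i, grid)) - f(Fraction(i, grid)))
               for i in range(grid + 1))

assert max_err(80) < max_err(10)
```

For $f(t) = t^2$ the classical identity $B_n f = t^2 + t(1-t)/n$ makes the maximal error on this grid exactly $1/(4n)$.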

To this end, using (3.2) and the equation
$$\frac{1}{p!} \left( \sum_{j=2}^{s} \sigma^j \cdot D \right)^{p} = \sum_{|\theta^2| + \cdots + |\theta^s| = p} \prod_{j=2}^{s} \frac{\left( \sigma^j \right)^{\theta^j} D^{\theta^j}}{\theta^j!},$$
we conclude that
$$T_{\gamma n} f = \sum_{|\sigma| = n - |\gamma|} \binom{n - |\gamma|}{\sigma} \left( a^1 \right)^{\sigma} \left\{ \sum_{p=1}^{|\gamma|} n^{-p} \sum_{|\theta^2| + \cdots + |\theta^s| = p} \prod_{j=2}^{s} \frac{I_{\gamma_j \theta^j}(a^j)}{\theta^j!} \left( D^{\theta^2 + \cdots + \theta^s} f \right)\!\left( \frac{\sigma}{n} \right) + o\!\left( n^{-|\gamma|} \right) \right\}, \qquad n \to \infty. \tag{3.3}$$


To simplify the right-hand side of (3.3) we observe that whenever $\ell \in J$ and $|\theta^\ell| < \gamma_\ell$ it follows that $I_{\gamma_\ell \theta^\ell}(a^\ell) = 0$. Therefore, if
$$\prod_{j=2}^{s} \frac{I_{\gamma_j \theta^j}(a^j)}{\theta^j!} \neq 0 \tag{3.4}$$
then for all $\ell \in J$ we obtain $|\theta^\ell| \geq \gamma_\ell$. In particular, (3.4) implies that
$$\sum_{\ell \in J} \left| \theta^\ell \right| \geq |\gamma|_A.$$
But the range of summation in (3.3) requires that
$$|\gamma| \geq p = \sum_{\ell=2}^{s} \left| \theta^\ell \right| \geq |\gamma|_A.$$
Thus the summands in (3.3) corresponding to $p = |\gamma|_A$ require $\theta^\ell = 0$ for $\ell \notin J$ and $|\theta^\ell| = \gamma_\ell$ for $\ell \in J$, and these terms dominate as $n \to \infty$. These facts give the asymptotic formula
$$T_{\gamma n} f = \frac{\left( a^1 \cdot e \right)^{n - |\gamma|}}{n^{|\gamma|_A}} \left( \mu_\gamma \left( B_{n - |\gamma|} D_{H^\gamma} f \right)\!\left( \frac{a^1}{a^1 \cdot e} \right) + o(1) \right), \qquad n \to \infty,$$
which proves the result. □

We specialize this result to the case considered in theorem 2.2. For the matrix $B$ in (2.1) we have that $b^1 \cdot e = 2$ and $b^2 \cdot e = 0$. Therefore, we conclude that $G = J = \{2\}$ and $\mu_\gamma = 1$. Following our notational convention, $A^{(n)}_{\alpha\beta}$ in this case is identified with $B^{(n)}_{kj}$, where $\alpha = (n-k, k)$ and $\beta = (n-j, j)$. Thus, for the function $f$ defined for $(x_1, x_2) \in \Delta_2$ by $f(x_1, x_2) = g(x_2)$, where $g$ is defined on $[0,1]$, we see that $T_{\alpha n} f = T_{kn} g$. Hence, theorem 3.2 says, when $g \in C^k[0,1]$, that
$$\lim_{n \to \infty} \frac{n^k}{2^{n-k}}\, T_{kn} g = \sum_{p=0}^{k} \frac{(-1)^{k-p}\, k!}{p! (k-p)!} \left( \frac{\partial^k f}{\partial x_1^p \partial x_2^{k-p}} \right)\!\left( \frac{1}{2}, \frac{1}{2} \right) = (-1)^k g^{(k)}\!\left( \frac{1}{2} \right).$$
This equation is also a consequence of theorem 2.2.

Corollary 3.4. If $A = (A_{ij})$, $i, j = 1, 2, \dots, s$, is a matrix such that for some constant $c \neq 0$, $A^2 = cI$, $A_{i1} = 1$, $1 \leq i \leq s$, and $f \in C^{|\gamma|}(\Delta_s)$, where $\gamma = (\gamma_2, \dots, \gamma_s) \in \mathbb{Z}_+^{s-1}$, then
$$\lim_{n \to \infty} c^{-n + |\gamma|}\, n^{|\gamma|}\, T_{\gamma n} f = \left( D_{H^\gamma} f \right)\!\left( \frac{a^1}{c} \right),$$
where $H^\gamma$ consists of $a^2$ repeated $\gamma_2$ times, ..., $a^s$ repeated $\gamma_s$ times.


Proof. Our hypothesis implies that $a^j \cdot e = c\,\delta_{j1}$, $j = 1, 2, \dots, s$. Thus $G = J$, $\mu_\gamma = 1$, and the result follows from theorem 3.2. □

The additional complexity of the multivariate case conceals from us an error bound of the type presented in theorem 2.2. An improvement of theorem 3.2 in this direction would be desirable.

Our intention now is to provide a multivariate version of equations (2.7) and (2.8). First, we define the limit polynomial. Starting with an $s \times s$ matrix $A$ such that $\prod_{j=1}^{s} A_{j1} \neq 0$ and a $\gamma \in \mathbb{Z}_+^{s-1}$, we introduce the vectors
$$\hat{a}^j := A_{j1}^{-1} \left( A_{j2}, \dots, A_{js} \right), \qquad j = 1, \dots, s,$$
in $\mathbb{R}^{s-1}$ and define a polynomial $Q_\gamma \in H_{|\gamma|}(\mathbb{R}^s)$ by setting, for any $x = (x_1, \dots, x_s) \in \mathbb{R}^s$,
$$Q_\gamma(x) = \frac{1}{\gamma!} \left( \sum_{j=1}^{s} x_j\, \hat{a}^j \right)^{\gamma}.$$
Analogous to our notational convention in theorem 3.2, we extend every $\gamma \in \mathbb{Z}_+^{s-1}$ with $|\gamma| \leq n$ uniquely to a $\beta \in \Lambda_n$ and then denote $A^{(n)}_{\alpha\beta}$ by $A^{(n)}_{\alpha\gamma}$.

Theorem 3.5. If $A$ is an $s \times s$ matrix, $s > 1$, such that $\prod_{j=1}^{s} A_{j1} \neq 0$, then for every $\gamma \in \mathbb{Z}_+^{s-1}$ we have that
$$\lim_{n \to \infty} \max_{\alpha \in \Lambda_n} \left| n^{-|\gamma|}\, \rho_\alpha\, A^{(n)}_{\alpha\gamma} - Q_\gamma\!\left( \frac{\alpha}{n} \right) \right| = 0,$$
where
$$\rho_\alpha := \prod_{j=1}^{s} A_{j1}^{-\alpha_j}.$$

Proof. For the proof of this result we develop yet another formula for the entries of the matrix $A^{(n)}$. Let $\delta = (1, 0, \dots, 0) \in \mathbb{Z}_+^s$ and $\alpha, \beta \in \Lambda_n$. Since $C_A(m_\alpha)$ is homogeneous of degree $n$ and $|\beta| = n$, the derivative $D^\beta(C_A(m_\alpha))$ is constant, so it may be evaluated at $\delta$. Using the definition of $A^{(n)}$ and the product rule for differentiation we obtain
$$\rho_\alpha\, A^{(n)}_{\alpha\beta} = \frac{\rho_\alpha}{\beta!}\, D^\beta \left( C_A(m_\alpha) \right)(\delta) = \sum_{\sigma^1 + \cdots + \sigma^s = \beta} \prod_{j=1}^{s} \frac{\left( a^j / A_{j1} \right)^{\sigma^j}}{\sigma^j!} \prod_{k=0}^{|\sigma^j| - 1} \left( \alpha_j - k \right). \tag{3.5}$$
To write this sum in the desired form we note the following identity. If $i_1, \dots, i_s, i, j$ are nonnegative integers with $i = i_1 + \cdots + i_s$, then
$$\sum_{j_1 + \cdots + j_s = j} \binom{i_1}{j_1} \cdots \binom{i_s}{j_s} = \binom{i}{j}. \tag{3.6}$$


One way to prove this is to check that the sequences appearing on the left and right sides of (3.6) have the same generating function $(1 + t)^i$. Now, with (3.6) in hand, we return to (3.5) and sum over the first coordinates of the vectors $\sigma^1, \dots, \sigma^s$ and then over the remaining $s - 1$ coordinates. Since the first coordinate of the vector $A_{j1}^{-1} a^j$ is $1$, the sum over the first coordinates of $\sigma^1, \dots, \sigma^s$ is, by specializing (3.6), seen to be $1$. Hence we conclude that
$$\rho_\alpha\, n^{-|\gamma|}\, A^{(n)}_{\alpha\beta} = \sum_{\mu^1 + \cdots + \mu^s = \gamma} \prod_{j=1}^{s} \frac{\left( \hat{a}^j \right)^{\mu^j}}{\mu^j!} \prod_{k=0}^{|\mu^j| - 1} \left( \frac{\alpha_j}{n} - \frac{k}{n} \right).$$
Since we have that
$$Q_\gamma(x) = \sum_{\mu^1 + \cdots + \mu^s = \gamma} \prod_{j=1}^{s} \frac{\left( \hat{a}^j \right)^{\mu^j}}{\mu^j!}\, x_j^{|\mu^j|},$$
the result follows. □
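Specializing theorem 3.5 to the matrix $B$ of section 2 (where every $A_{j1} = 1$, so $\rho_\alpha = 1$, $\hat{a}^1 = (1)$ and $\hat{a}^2 = (-1)$), with $\alpha = (n-j, j)$ and $\gamma = (g)$ the claim reads $n^{-g} B^{(n)}_{jg} \to (1 - 2j/n)^g / g!$ uniformly in $j$. The sketch below (our code) checks this for $g = 2$.

```python
# Sketch (ours): theorem 3.5 for the matrix B of section 2, g = 2.
from fractions import Fraction
from math import comb, factorial

def b_entry(j, k, n):
    # Coefficient of x^k in (1 - x)^j (1 + x)^(n - j).
    return sum((-1)**i * comb(j, i) * comb(n - j, k - i) for i in range(min(j, k) + 1))

def max_err(n, g=2):
    return max(abs(Fraction(b_entry(j, g, n), n**g)
                   - (1 - Fraction(2 * j, n))**g / factorial(g))
               for j in range(n + 1))

assert max_err(200) < max_err(50) < Fraction(1, 50)
```

Here the deviation works out to exactly $1/(2n)$, uniformly in $j$, again an $O(1/n)$ rate.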

Our next result identifies the form of the matrix $A$, for $s > 2$, for which $Q_\gamma$ is a shifted monomial.

Proposition 3.6. If $A$ is an $s \times s$ matrix, $s > 1$, such that $A^2 = cI$ for some constant $c \neq 0$, $A_{j1} = 1$, $j = 1, \dots, s$, and there is an $a \in \mathbb{R}^{s-1}$ such that for any $\gamma \in \mathbb{Z}_+^{s-1}$ there is a $c_\gamma \neq 0$ with $Q_\gamma = c_\gamma\, m_\gamma(\cdot - a)$ on $\Delta_s$, then either $s = 2$ or
$$A = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 1 & -1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & \cdots & -1 \end{pmatrix}. \tag{3.7}$$

Proof. Our requirement on $A$ means for every $k = 2, \dots, s$ that there are nonzero constants $c_k$ such that for all $(x_1, \dots, x_s) \in \mathbb{R}^s$,
$$\sum_{j=1}^{s} A_{jk}\, x_j = c_k \left( x_k - a_k \sum_{j=1}^{s} x_j \right), \qquad k = 2, \dots, s.$$
Thus we conclude for $j = 1, \dots, s$ and $k = 2, \dots, s$ that
$$A_{kk} = c_k (1 - a_k), \qquad A_{jk} = -c_k a_k, \quad j \neq k.$$
We now use the fact that the matrix $E := A^2 - cI$ is zero. The entry $E_{11} = 0$ tells us that $1 - \sum_{k=2}^{s} c_k a_k = c$, while for $j = 2, \dots, s$, the entry $E_{j1} = 0$ implies that $1 - \sum_{k=2}^{s} c_k a_k + c_j = 0$. Hence for $j = 2, \dots, s$, $c_j = -c$ and $\sum_{j=2}^{s} a_j = (c-1)/c$. Likewise for $j = 2, \dots, s$ the entry $E_{jj} = 0$ gives the equation $c_j a_j = 1 - c$, which implies $a_j = (c-1)/c$. Therefore we conclude that either $s = 2$ or $c = 1$. In the latter case, $A$ takes the form (3.7). This proves the result. □
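As a minimal check (our own sketch), the matrix (3.7) -- ones in the first column, $-1$ on the rest of the diagonal -- indeed satisfies $A^2 = I$, i.e. $c = 1$ in proposition 3.6.

```python
# Sketch (ours): the matrix (3.7) satisfies A^2 = I, here for s = 4.
s = 4
A = [[1 if k == 0 else (-1 if j == k else 0) for k in range(s)] for j in range(s)]
sq = [[sum(A[j][m] * A[m][k] for m in range(s)) for k in range(s)] for j in range(s)]
assert sq == [[1 if j == k else 0 for k in range(s)] for j in range(s)]
```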


Specializing corollary 3.4 to the matrix appearing in (3.7), we see that $J = G$ and $\mu_\gamma = 1$. Given the function $f$ defined on $\Delta_s$, we define $F$ by setting, for $(x_1, \dots, x_s) \in \Delta_s$, $F(x_2, \dots, x_s) = f(x_1, x_2, \dots, x_s)$. Therefore
$$\left( D_{H^\gamma} f \right)(\delta) = (-1)^{|\gamma|} \left( D^\gamma F \right)(0)$$
and whenever $f \in C^{|\gamma|}(\Delta_s)$ we have (for the matrix in (3.7)) that
$$\lim_{n \to \infty} (-1)^{|\gamma|}\, n^{|\gamma|}\, T_{\gamma n} f = \left( D^\gamma F \right)(0),$$

which, in the spirit of the paper, is a discrete Taylor series.

Acknowledgements

We are grateful to Carl de Boor for helpful discussions concerning the uniqueness in proposition 3.6, to Say Song Goh for assistance in improving the presentation of the paper, to Chris Heil for pointing out the appearance of binomial matrices in [2,3] and to Roderick Wong for discussions on Krawtchouk polynomials.

References

[1] A.N. Akansu, R.A. Haddad and H. Caglar, The binomial QMF-wavelet transform for multiresolution signal decomposition, IEEE Trans. Signal Process. 41 (1993) 13-19.
[2] C. Cabrelli, C. Heil and U. Molter, Accuracy of lattice translates of several multidimensional refinable functions, J. Approx. Theory 95 (1998) 5-52.
[3] C. Cabrelli, C. Heil and U. Molter, Accuracy of several multidimensional refinable distributions, J. Fourier Anal. Appl., to appear.
[4] A.S. Cavaretta and C.A. Micchelli, Pyramid patches provide potential polynomial paradigms, in: Mathematical Methods in Computer Aided Geometric Design, Vol. 2, eds. T. Lyche and L.L. Schumaker (Academic Press, New York, 1992) pp. 69-100.
[5] W. Dahmen and C.A. Micchelli, Translates of multivariate splines, Linear Algebra Appl. 52 (1983) 217-234.
[6] R.A. Haddad, A class of orthogonal nonrecursive binomial filters, IEEE Trans. Audio Electroacoust. 19 (1971) 296-304.
[7] R.A. Haddad and A.N. Akansu, A new orthogonal transform for signal coding, IEEE Trans. Acoust. Speech Signal Process. 36 (1988) 1404-1411.
[8] X.C. Li and R. Wong, A uniform asymptotic expansion for Krawtchouk polynomials, J. Approx. Theory 106 (2000) 155-184.
[9] F. MacWilliams and N.J.A. Sloane, The Theory of Error-Correcting Codes (North-Holland, New York, 1977).
[10] C.A. Micchelli, N-width for holomorphic functions on the unit ball in n-space, in: Methods of Functional Analysis in Approximation Theory, eds. C.A. Micchelli, D.V. Pai and B.V. Limaye, Bombay, 1985 (Birkhäuser, Basel/Boston, MA, 1986) pp. 195-204.
[11] C.A. Micchelli, Mathematical Aspects of Geometric Modeling, CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 65 (SIAM, Philadelphia, PA, 1995).
[12] G. Szegő, Orthogonal Polynomials, 4th ed., American Mathematical Society Colloquium Publications, Vol. 23 (Amer. Math. Soc., Providence, RI, 1975).
