Improving Efficiency by Shrinkage

This article was downloaded by: [Umeå University Library] On: 07 October 2014, At: 08:37 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Technometrics Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/utch20

Improving Efficiency by Shrinkage
Felix Famoye, Central Michigan University
Published online: 12 Mar 2012.

To cite this article: Felix Famoye (1999) Improving Efficiency by Shrinkage, Technometrics, 41:2, 176-176 To link to this article: http://dx.doi.org/10.1080/00401706.1999.10485650


BOOK REVIEWS


parameter is estimated from the data. For clarity of exposition, Chapter 7 restricts its attention to the special case in which the null hypothesis is that there is no association between the independent and dependent variables. Introduced here is the concept of using order-selection criteria of a Fourier series to test the lack-of-fit hypothesis. Chapter 8 looks at the more general case. Chapter 9 examines several generalizations. Topics include approaches to higher-dimensional regression lack-of-fit tests, additive models, testing curves between treatment groups for equality, and tests of white noise. Chapter 10 gives some examples.

There is a good deal of information in this book that is actionable by the practitioner. The nonparametric smoothing techniques described are of real practical value. The two-dimensional lack-of-fit results are interesting and a natural starting point for the exposition. In some areas of application, they may be of utility in their own right. From a multivariate model-building point of view, the extensions of Chapter 9 are essential. The coverage here seems somewhat slight, falling off at a critical juncture for those who might wish to apply the techniques. The discussion of additive models in Section 9.4 is interesting as far as it goes. For instance, a discussion of the relationship of these approaches to that taken by generalized additive models would have been useful. Additional examples on this topic beyond the one in Chapter 10 would have been helpful. Finally, the availability of software written in a standard language such as S-PLUS or Matlab would have been a real plus.

William ALEXANDER
First Union Capital Markets

Improving Efficiency by Shrinkage, by Marvin H. J. GRUBER, New York: Marcel Dekker, 1998, ISBN 0-8247-0156-9, xii + 632 pp., $195.

The organization of the book is as follows:

Part I. Introduction to Shrinkage Estimators
I. Introduction
II. The Stein Paradox
III. The Ridge Estimators of Hoerl and Kennard

Part II. Estimation for a Single Linear Model
IV. James-Stein Estimators for a Single Linear Model
V. Ridge Estimators from Different Points of View
VI. Improving the James-Stein Estimator: The Positive Parts

Part III. Other Linear Model Setups
VII. The Simultaneous Estimation Problem
VIII. The Precision of Individual Estimators
IX. The Multivariate Linear Model
X. Other Linear Model Setups
XI. Summary and Conclusion

Chapter I introduces the James-Stein and the ridge estimators. This chapter contains a historical survey of the literature on these estimators. Using both the Bayesian and the frequentist approach, the author formulates the James-Stein estimator in Chapter II. In Chapter III, different types of ridge estimators are obtained. This chapter contains a summary of some Monte Carlo simulations on ridge estimators. The James-Stein estimator is formulated for a single linear model in Chapter IV. Four methods are used to derive ridge estimators in Chapter V. In Chapter VI, five different positive-parts James-Stein estimators are defined. The author considers the mean of a multivariate normal distribution and the parameters of a single linear regression model. Among the materials in Chapter VII are the definitions of three James-Stein-type estimators (Rao, Wind, and Dempster). The properties of the risks of these estimators are discussed in Chapter VIII. The results in Chapters VII and VIII are extended to the multivariate linear model in Chapter IX. In Chapter X, other linear models are considered for the estimators discussed in Chapters I-IX. Chapter XI is a summary of what the book covers.
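For readers new to the estimators the book treats, a minimal sketch may help. The code below is not from the book; it implements the standard textbook formulas: the James-Stein estimator (with shrinkage toward the origin), its positive-part variant, and the ridge estimator with a user-chosen ridge constant k.

```python
import numpy as np

# James-Stein estimator for the mean of x ~ N(theta, I_p), p >= 3,
# from a single observation, shrinking toward the origin:
#   theta_hat = (1 - (p - 2) / ||x||^2) * x
def james_stein(x):
    p = x.size
    return (1.0 - (p - 2) / np.dot(x, x)) * x

# Positive-part James-Stein: truncate the shrinkage factor at zero so the
# estimate is never "shrunk past" the origin.
def james_stein_plus(x):
    p = x.size
    factor = max(0.0, 1.0 - (p - 2) / np.dot(x, x))
    return factor * x

# Ridge estimator for a single linear model y = X beta + error:
#   beta_hat(k) = (X'X + k I)^{-1} X'y   (k = 0 gives ordinary least squares)
def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

With k = 0 the ridge formula reduces to least squares; increasing k shortens the coefficient vector, trading bias for variance, which is the source of the efficiency gain in the book's title.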
The author states in the preface that “The purpose of this book is to give a unified treatment of these two kinds of estimators from both a Bayesian and a non-Bayesian (frequentist) point of view.” The two kinds of estimators referred to are the ridge and the James-Stein estimators. The statement by the author is an accurate description of the book’s purpose.

The book is well organized, and it contains very useful materials. Each chapter, except Chapter XI, contains some exercises. These exercises will help readers check their own understanding of the materials presented and are a meaningful part of the book. Except for Chapters I and XI, every chapter ends with a summary, a nice feature of the book. There is an author index and a subject index. The author’s effort in presenting the historical survey in Chapter I is commendable. In addition to its well-written theoretical development, this book can also serve as a reference for the ridge and the James-Stein estimators. An additional nice feature of the book is the inclusion of some of the computer codes used to generate the numerical examples. It would have been much better if a floppy disk containing the computer codes had been provided with the book.

I do have a few negative comments, however. The book appears to have been printed photographically from the typing done by the author. A desktop publisher could have done a better job (and a spell checker could have been used). There are many editorial errors (misspelled words, missing words, misplaced periods, inappropriate gaps between words, missing tables, etc.) that distract readers from the good quality of the book. For example, (a) on page 6, Example 1.2.1 refers to Table 1.2.1, which is not provided; (b) the last paragraph on page 40 is repeated starting at line 1 of page 41; and (c) in Chapter VI, Subsection 6.3.1 has the title “The Multivariate Normal Distribution” (p. 337), but the title of Subsection 6.3.2, which should have been “The Single Linear Model,” seems to be missing. These numerous errors will limit the use of the book as a textbook for a graduate course. The author has done a great job in writing the materials. The book is more on the theoretical and mathematical side than on the application side.
The estimators and the results discussed in the book will be useful to practitioners in the physical, chemical, and engineering sciences. The book is a very good addition to the literature on James-Stein and ridge estimators.

Felix FAMOYE
Central Michigan University

Statistical Inference Based on the Likelihood, by Adelchi AZZALINI, London: Chapman & Hall, 1996, ISBN 0-412-60650-X, x + 341 pp., $64.95 (currently distributed by CRC Press, 1-800-272-7737).

This book, based on notes for a second-year graduate-level course in statistics at the University of Padua, represents a condensation of concepts and methods that appear to the author “to form the core of the discipline in its present state.” The text deliberately eschews measure-theoretic niceties. A principal aim of the book is to show “how the main body of currently used statistical techniques can be generated from a few key concepts, in particular the likelihood” (p. ix). Bayesian theory is completely absent, as is any treatment of such subjects as nonparametric, computer-intensive (e.g., bootstrap), or multivariate methods. In an “Introduction and Overview,” the author surveys briefly the relative roles of inference, data description, sampling, and the probability-model roots of statistics. Point and interval estimates are motivated via maximum and relative likelihood, and the book’s allegiance to a frequentist-based viewpoint is reinforced. Chapter 2 on likelihood begins with a fast-paced lead-in to the notions of parametric statistical models, sample spaces, location-scale families, and identifiability, before settling down to a clear discussion of likelihood, illustrating the concept with iid samples (“srs,” simple random sample, in the author’s jargon) from normal and uniform distributions, as well as Markov-chain and censored-survival-times examples. Two versions of the likelihood principle are motivated. Then sufficiency, minimal sufficiency, the Neyman factorization characterization, and exponential and regular exponential families are covered in some detail.
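The factorization idea can be made concrete with a short sketch (mine, not the book's): for an iid normal sample, the log-likelihood depends on the data only through n, the sum of the observations, and the sum of their squares, so those summaries form a sufficient statistic.

```python
import numpy as np

# Log-likelihood of an iid N(mu, sigma2) sample, computed from the raw data.
def loglik(mu, sigma2, x):
    n = x.size
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - np.sum((x - mu) ** 2) / (2 * sigma2))

# The same quantity computed only from (n, sum x_i, sum x_i^2), using
# sum (x_i - mu)^2 = s2 - 2*mu*s1 + n*mu^2. The data reach the likelihood
# only through these summaries: the Neyman factorization in action.
def loglik_from_stats(mu, sigma2, n, s1, s2):
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - (s2 - 2.0 * mu * s1 + n * mu ** 2) / (2.0 * sigma2))

x = np.array([1.2, -0.3, 0.8, 2.1, 0.4])
full = loglik(0.5, 1.0, x)
reduced = loglik_from_stats(0.5, 1.0, x.size, x.sum(), (x ** 2).sum())
# full and reduced agree to floating-point precision
```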
The author initially defines sufficiency in terms of an equivalence between statistic-induced and likelihood-induced partitions of the sample space, to emphasize the role of the likelihood, then proves Neyman factorization, and winds up with the more usual conditionality definition. Minimally sufficient statistics are defined as those inducing the likelihood partition. [Quibble: The author is incorrect in drawing the conclusion from this definition that “minimal sufficient statistic[s] [are] essentially unique” (p. 37). Cf. Romano-Siegel
