This book provides a self-contained introduction to shrinkage estimation for matrix-variate normal distribution models. More specifically, it presents recent techniques and results in the estimation of mean and covariance matrices in a high-dimensional setting in which the sample covariance matrix is singular. Such high-dimensional models can be analyzed using the same arguments as for low-dimensional models, thus yielding a unified approach to both high- and low-dimensional shrinkage estimation. This unified shrinkage approach not only integrates modern and classical shrinkage estimation, but is also required for further development of the field. Beginning with the notion of decision-theoretic estimation, this book explains matrix theory, group invariance, and other mathematical tools for finding better estimators. It also includes examples of shrinkage estimators that improve standard estimators, such as least squares, maximum likelihood, and minimum risk invariant estimators, and discusses the historical background and related topics in decision-theoretic estimation of parameter matrices. This book is useful for researchers and graduate students in various fields requiring data analysis skills as well as in mathematical statistics.
This book provides a self-contained introduction to Stein/shrinkage estimation for the mean vector of a multivariate normal distribution. The book begins with a brief discussion of basic notions and results from decision theory, such as admissibility, minimaxity, and (generalized) Bayes estimation. It also presents Stein's unbiased risk estimator and the James-Stein estimator in the first chapter. In the following chapters, the authors consider estimation of the mean vector of a multivariate normal distribution in the known and unknown scale cases when the covariance matrix is a multiple of the identity matrix and the loss is scaled squared error. The focus is on admissibility, inadmissibility, and minimaxity of (generalized) Bayes estimators, with particular attention paid to the class of (generalized) Bayes estimators with respect to an extended Strawderman-type prior. For almost all results in this book, the authors present a self-contained proof. The book is helpful for researchers and graduate students in various fields requiring data analysis skills as well as in mathematical statistics.
This book provides a self-contained introduction to mixed-effects models and small area estimation techniques. In particular, it focuses on both introducing classical theory and reviewing the latest methods. First, basic issues of mixed-effects models, such as parameter estimation, random effects prediction, variable selection, and asymptotic theory, are introduced. Standard mixed-effects models used in small area estimation, namely the Fay-Herriot model and the nested error regression model, are then introduced. Both frequentist and Bayesian approaches are given for computing predictors of small area parameters of interest. To measure the uncertainty of the predictors, several methods for calculating mean squared errors and confidence intervals are discussed. Various advanced approaches using mixed-effects models are introduced, ranging from frequentist to Bayesian methods. This book is helpful for researchers and graduate students in fields requiring data analysis skills as well as in mathematical statistics.