The book provides a comprehensive treatment of statistical inference using permutation techniques. It features a variety of useful and powerful data analytic tools that rely on very few distributional assumptions. Although many of these procedures have appeared in journal articles, they are not readily available to practitioners.
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
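As a concrete illustration of how a permutation probability value is generated for a measure of association between two interval-level variables, the following minimal R sketch enumerates every arrangement of one variable to obtain an exact two-sided probability value for the Pearson correlation. The data and the small helper function perms() are hypothetical illustrations, not material from the book.

# Exact permutation probability value for a Pearson correlation (sketch).
# Data and the perms() helper are hypothetical, for illustration only.
x <- c(1.2, 2.4, 3.1, 4.8, 5.0)
y <- c(2.0, 2.9, 3.3, 4.1, 5.5)

perms <- function(v) {                       # all permutations of a small vector
  if (length(v) <= 1) return(list(v))
  out <- list()
  for (i in seq_along(v))
    for (p in perms(v[-i])) out <- c(out, list(c(v[i], p)))
  out
}

r_obs <- cor(x, y)                           # observed Pearson correlation
r_all <- sapply(perms(y), function(p) cor(x, p))

# exact two-sided probability value: proportion of the n! equally likely
# arrangements of y whose correlation is at least as extreme as the observed value
p_exact <- mean(abs(r_all) >= abs(r_obs))
p_exact

Exhaustive enumeration of this kind is feasible only for small samples; for larger samples the probability value is instead estimated from a random subset of arrangements, as in the Monte Carlo sketch below.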
This book takes a unique approach to explaining permutation statistics by integrating permutation statistical methods with a wide range of classical statistical methods and associated R programs. It opens by comparing and contrasting two models of statistical inference: the classical population model espoused by J. Neyman and E.S. Pearson and the permutation model first introduced by R.A. Fisher and E.J.G. Pitman. Numerous comparisons of permutation and classical statistical methods are presented, supplemented with a variety of R scripts for ease of computation. The text follows the general outline of an introductory textbook in statistics with chapters on central tendency and variability, one-sample tests, two-sample tests, matched-pairs tests, completely-randomized analysis of variance, randomized-blocks analysis of variance, simple linear regression and correlation, and the analysis of goodness of fit and contingency. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity, depend only on the observed data, and do not require random sampling. The methods are relatively new in that it took modern computing power to make them available to those working in mainstream research. Designed for an audience with a limited statistical background, the book can easily serve as a textbook for undergraduate or graduate courses in statistics, psychology, economics, political science or biology. No statistical training beyond a first course in statistics is required, but some knowledge of, or some interest in, the R programming language is assumed.
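For readers who want a sense of what such R scripts do, here is a minimal Monte Carlo (resampling) sketch of a two-sample permutation test on the difference in means. It is an illustration under assumed data, not one of the book's own scripts, and the number of random rearrangements is arbitrary.

# Monte Carlo two-sample permutation test (sketch); data are hypothetical.
set.seed(1)
x <- c(12.1, 9.8, 11.4, 10.2, 13.0)          # hypothetical treatment scores
y <- c( 8.7, 9.1, 10.0,  8.9,  9.5)          # hypothetical control scores

obs    <- mean(x) - mean(y)                  # observed test statistic
pooled <- c(x, y)
n      <- length(x)
L      <- 9999                               # number of random rearrangements

perm <- replicate(L, {
  idx <- sample(length(pooled), n)           # random relabelling of the pooled data
  mean(pooled[idx]) - mean(pooled[-idx])
})

# two-sided Monte Carlo probability value (observed arrangement included)
p_value <- (sum(abs(perm) >= abs(obs)) + 1) / (L + 1)
p_value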
The primary purpose of this textbook is to introduce the reader to a wide variety of elementary permutation statistical methods. Permutation methods are optimal for small data sets and non-random samples, and are free of distributional assumptions. The book follows the conventional structure of most introductory books on statistical methods, and features chapters on central tendency and variability, one-sample tests, two-sample tests, matched-pairs tests, one-way fully-randomized analysis of variance, one-way randomized-blocks analysis of variance, simple regression and correlation, and the analysis of contingency tables. In addition, it introduces and describes a comparatively new permutation-based, chance-corrected measure of effect size. Because permutation tests and measures are distribution-free, do not assume normality, and do not rely on squared deviations among sample values, they are currently being applied in a wide variety of disciplines. This book presents permutation alternatives to existing classical statistics, and is intended as a textbook for undergraduate statistics courses or graduate courses in the natural, social, and physical sciences, while assuming only an elementary grasp of statistics.
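As a hedged illustration of a chance-corrected measure of effect size, the sketch below computes a statistic of the form 1 - delta/mu, where delta is the observed (group-size-weighted) average within-group Euclidean distance and mu is its expectation over all equally likely rearrangements of the observations. Whether this matches the book's exact definition and weighting is an assumption; the data and group labels are hypothetical.

# Chance-corrected effect size of the assumed form 1 - delta/mu (sketch).
scores <- c(12.1, 9.8, 11.4, 10.2, 8.7, 9.1, 10.0, 8.9)
group  <- c(1, 1, 1, 1, 2, 2, 2, 2)

d <- as.matrix(dist(scores))                 # pairwise Euclidean distances

avg_within <- function(g) {                  # mean distance within one group
  idx <- which(group == g)
  sub <- d[idx, idx]
  mean(sub[upper.tri(sub)])
}

groups <- unique(group)
w      <- as.numeric(table(group)[as.character(groups)]) / length(scores)
delta  <- sum(w * sapply(groups, avg_within))   # observed within-group distance
mu     <- mean(d[upper.tri(d)])                 # expectation of delta under permutation

effect_size <- 1 - delta / mu                # 0 under chance; larger values mean
effect_size                                  # groups tighter than expected by chance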
The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, analysis of variance, contingency table analysis, and measures of association and agreement. A non-mathematical approach makes the text accessible to readers of all levels.
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. The book is written for a statistically-informed audience, and can also easily serve as a textbook in a graduate course in departments such as statistics, psychology, or biology. In particular, the audience for the book is teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as medical research, epidemiology, public health, and biology.
This is the second edition of the comprehensive treatment of statistical inference using permutation techniques. It makes available to practitioners a variety of useful and powerful data analytic tools that rely on very few distributional assumptions. Although many of these procedures have appeared in journal articles, they are not readily available to practitioners. This new and updated edition places increased emphasis on the use of alternative permutation statistical tests based on metric Euclidean distance functions that have excellent robustness characteristics. These alternative permutation techniques provide many powerful multivariate tests including multivariate multiple regression analyses.
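To suggest how a test built on metric Euclidean distance functions differs from one built on squared deviations, the sketch below applies a simple within-group average-distance statistic to bivariate (multivariate) data and evaluates it by randomly relabelling group membership. The statistic, the unweighted averaging of groups, and the data are illustrative assumptions, not the book's procedures.

# Monte Carlo permutation test based on Euclidean distances (sketch).
set.seed(2)
X <- rbind(matrix(rnorm(10, mean = 0), ncol = 2),   # group 1: 5 bivariate rows
           matrix(rnorm(10, mean = 1), ncol = 2))   # group 2: 5 bivariate rows
group <- rep(1:2, each = 5)
D <- as.matrix(dist(X))                             # Euclidean distances between rows

within_avg <- function(g, lab) {                    # mean distance within group g
  idx <- which(lab == g)
  sub <- D[idx, idx]
  mean(sub[upper.tri(sub)])
}
stat <- function(lab) mean(sapply(unique(lab), within_avg, lab = lab))

obs  <- stat(group)                                 # small values = tight groups
L    <- 4999
perm <- replicate(L, stat(sample(group)))           # shuffle group labels

# one-sided probability value: how often a random relabelling yields groups
# at least as internally homogeneous as the observed grouping
p_value <- (sum(perm <= obs) + 1) / (L + 1)
p_value

Because the statistic uses ordinary (unsquared) distances, extreme observations influence it less than they influence a variance-based statistic, which is the robustness property the distance-function approach emphasizes.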