This book presents, in a rigorous and thorough manner, the main elements of Charles Manski's research on partial identification of probability distributions. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. There is enormous scope for fruitful inference using data and assumptions that only partially identify population parameters.
This book provides a language and a set of tools for finding bounds on the predictions that social and behavioral scientists can logically make from nonexperimental and experimental data. The economist Charles Manski draws on examples from criminology, demography, epidemiology, social psychology, and sociology as well as economics to illustrate this language and to demonstrate the broad usefulness of the tools. There are many traditional ways to present identification problems in econometrics, sociology, and psychometrics. Some of these are primarily statistical in nature, using concepts such as flat likelihood functions and nondistinct parameter estimates. Manski's strategy is to divorce identification from purely statistical concepts and to present the logic of identification analysis in ways that are accessible to a wide audience in the social and behavioral sciences. In each case, problems are motivated by real examples of genuine policy importance, the mathematics is kept to a minimum, and the deductions on identifiability yield fresh insights.

Manski begins with the conceptual problem of extrapolating predictions from one population to some new population or to the future. He then analyzes in depth the fundamental selection problem that arises whenever a scientist tries to predict the effects of treatments on outcomes. He carefully specifies assumptions and develops his nonparametric methods of bounding predictions. Manski shows how these tools should be used to investigate common problems such as predicting the effect of family structure on children's outcomes and the effect of policing on crime rates. Successive chapters deal with topics ranging from the use of experiments to evaluate social programs, to the use of case-control sampling by epidemiologists studying the association of risk factors and disease, to the use of intentions data by demographers seeking to predict future fertility. The book closes by examining two central identification problems in the analysis of social interactions: the classical simultaneity problem of econometrics and the reflection problem faced in analyses of neighborhood and contextual effects.
Economists have long sought to learn the effect of a "treatment" on some outcome of interest, just as doctors do with their patients. A central practical objective of research on treatment response is to provide decision makers with information useful in choosing treatments. Often the decision maker is a social planner who must choose treatments for a heterogeneous population: for example, a physician choosing medical treatments for diverse patients or a judge choosing sentences for convicted offenders. But research on treatment response rarely provides all the information that planners would like to have. How then should planners use the available evidence to choose treatments? This book addresses key aspects of this broad question, exploring and partially resolving pervasive problems of identification and statistical inference that arise when studying treatment response and making treatment choices. Charles Manski addresses the treatment-choice problem directly using Abraham Wald's statistical decision theory, taking into account the ambiguity that arises from identification problems under weak but justifiable assumptions. The book unifies and further develops the influential line of research the author began in the late 1990s. It will be a valuable resource for researchers and upper-level graduate students in economics as well as other social sciences, statistics, epidemiology and related areas of public health, and operations research.
How cutting-edge economics can improve decision-making methods for doctors. Although uncertainty is a common element of patient care, it has largely been overlooked in research on evidence-based medicine. Patient Care under Uncertainty strives to correct this glaring omission. Applying the tools of economics to medical decision making, Charles Manski shows how uncertainty influences every stage, from risk analysis to treatment, and how it can be reasonably confronted. In the language of econometrics, uncertainty refers to the inadequacy of available evidence and knowledge to yield accurate information on outcomes. In the context of health care, a common example is the choice between periodic surveillance and aggressive treatment of patients at risk for a potential disease, such as women prone to breast cancer. While these choices make use of data analysis, Manski demonstrates how statistical imprecision and identification problems often undermine clinical research and practice. Reviewing prevailing practices in contemporary medicine, he discusses the controversy regarding whether clinicians should adhere to evidence-based guidelines or exercise their own judgment. He also critiques the wishful extrapolation of research findings from randomized trials to clinical practice. Exploring ways to make more sensible judgments with available data, to credibly use evidence, and to better train clinicians, Manski helps practitioners and patients face uncertainties honestly. He concludes by examining patient care from a public health perspective and the management of uncertainty in drug approvals. Rigorously interrogating current practices in medicine, Patient Care under Uncertainty explains why predictability in the field has been limited and furnishes criteria for more cogent steps forward.
This book is a full-scale exposition of Charles Manski's new methodology for analyzing empirical questions in the social sciences. He recommends that researchers first ask what can be learned from data alone, and then ask what can be learned when data are combined with credible weak assumptions. Inferences predicated on weak assumptions, he argues, can achieve wide consensus, while ones that require strong assumptions almost inevitably are subject to sharp disagreements. Building on the foundation laid in the author's Identification Problems in the Social Sciences (Harvard, 1995), the book's fifteen chapters are organized in three parts. Part I studies prediction with missing or otherwise incomplete data. Part II concerns the analysis of treatment response, which aims to predict outcomes when alternative treatment rules are applied to a population. Part III studies prediction of choice behavior. Each chapter juxtaposes developments of methodology with empirical or numerical illustrations. The book employs a simple notation and mathematical apparatus, using only basic elements of probability theory.
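For concreteness, here is a minimal sketch of the kind of worst-case bound treated in Part I, under an assumption made purely for illustration: the outcome y is known to lie in the unit interval, and z indicates whether y is observed.

```latex
% Worst-case bound on a population mean with missing outcome data (illustrative sketch).
% Assumed setup: y takes values in [0,1]; z = 1 when y is observed, z = 0 otherwise.
% By the law of total probability,
%   E[y] = E[y | z=1] P(z=1) + E[y | z=0] P(z=0).
% The data reveal nothing about E[y | z=0] beyond 0 <= E[y | z=0] <= 1, so
\[
  E[y \mid z=1]\,P(z=1)
  \;\le\; E[y] \;\le\;
  E[y \mid z=1]\,P(z=1) + P(z=0).
\]
% The width of this identification region is P(z=0), the probability of missing data;
% the bound collapses to a point only when no observations are missing.
```

Stronger assumptions, such as assuming the missing outcomes have the same mean as the observed ones, shrink this interval to a point; the trade-off between the credibility of such assumptions and the precision of the resulting inferences is the recurring theme of the book.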
Combining game theory with unprecedented data, this book analyzes how presidents under divided party government use threats and vetoes to wrest policy concessions from a hostile Congress.
The most crucial choice a high school graduate makes is whether to attend college or to go to work. Here is the most sophisticated study of the complexities behind that decision. Based on a unique data set of nearly 23,000 seniors from more than 1,300 high schools who were tracked over several years, the book treats the following questions in detail: Who goes to college? Does low family income prevent some young people from enrolling, or does scholarship aid offset financial need? How important are scholastic aptitude scores, high school class rank, race, and socioeconomic background in determining college applications and admissions? Do test scores predict success in higher education? Using the data from the National Longitudinal Study of the Class of 1972, the authors present a set of interrelated analyses of student and institutional behavior, each focused on a particular aspect of the process of choosing and being chosen by a college. Among their interesting findings: most high school graduates would be admitted to some four-year college of average quality, were they to apply; applicants do not necessarily prefer the highest-quality school; high school class rank and SAT scores are equally important in college admissions; federal scholarship aid has had only a small effect on enrollments at four-year colleges but a much stronger effect on attendance at two-year colleges; the attention paid to SAT scores in admissions is commensurate with the power of the scores in predicting persistence to a degree. This clearly written book is an important source of information on a perpetually interesting topic.
Manski argues that public policy is based on untrustworthy analysis. Failing to account for uncertainty in an uncertain world, policy analysis routinely misleads policy makers with expressions of certitude. Manski critiques the status quo and offers an innovation to improve both how policy research is conducted and how it is used by policy makers.
The author draws on examples from a range of disciplines to provide social and behavioral scientists with a toolkit for finding bounds on predictions of behavior based on nonexperimental and experimental data.