The aim of this book is to discuss various aspects of disseminating personal or business data collected in censuses or surveys or copied from administrative sources. The challenge is to present the data in a form that is useful for statistical research while providing sufficient protection for the individuals or businesses to whom the data refer. The major part of this book is concerned with how to define the disclosure problem and how to deal with it in practical circumstances.
Statistical disclosure control is the discipline concerned with producing statistical data that are safe enough to be released to external researchers. This book concentrates on the methodology of the area and deals with both microdata (individual data) and tabular (aggregated) data. It develops the theory from what can be called the paradigm of statistical confidentiality: modify unsafe data in such a way that safe (enough) data emerge, with minimum information loss. The book discusses what safe data are, how information loss can be measured, and how to modify the data in a (near-)optimal way. Once it has been decided how to measure safety and information loss, producing safe data from unsafe data is often a matter of solving an optimization problem. Several such problems are discussed in the book, and most of them turn out to be hard problems that can only be solved approximately. The authors present new results that have not been published before. The book does not describe a closed area; on the contrary, many spots still await fuller exploration, and some of these are indicated in the book.

The book will be useful for official, social, and medical statisticians and others involved in releasing personal or business data for statistical use. Operations researchers may be interested in the optimization problems involved, particularly for the challenges they present.

Leon Willenborg has worked at the Department of Statistical Methods at Statistics Netherlands since 1983, first as a researcher and, since 1989, as a senior researcher. Since 1989 his main field of research and consultancy has been statistical disclosure control. From 1996 to 1998 he was the project coordinator of the EU co-funded SDC project.
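To make the idea of "safe data with minimum information loss as an optimization problem" concrete, the following is a minimal illustrative sketch, not taken from the book: the toy records, the safety rule (each quasi-identifier combination must occur at least K times), the information-loss measure (number of suppressed columns), and the brute-force search strategy are all assumptions chosen for illustration only.

```python
# Illustrative sketch (not from the book): treat disclosure control as an
# optimization problem -- make microdata "safe" (every quasi-identifier
# combination occurs at least K times) while minimizing information loss
# (here: how many quasi-identifier columns are suppressed).
from collections import Counter
from itertools import combinations

K = 2  # hypothetical safety threshold
QUASI_IDENTIFIERS = ["zip", "age_band", "sex"]

records = [  # hypothetical microdata
    {"zip": "1011", "age_band": "30-39", "sex": "F", "income": 41000},
    {"zip": "1011", "age_band": "30-39", "sex": "F", "income": 38000},
    {"zip": "1012", "age_band": "40-49", "sex": "M", "income": 52000},
    {"zip": "1013", "age_band": "40-49", "sex": "M", "income": 47000},
]

def key(rec, suppressed):
    """Quasi-identifier combination with suppressed attributes replaced by '*'."""
    return tuple("*" if q in suppressed else rec[q] for q in QUASI_IDENTIFIERS)

def is_safe(suppressed):
    """Safe if every key combination occurs at least K times."""
    counts = Counter(key(r, suppressed) for r in records)
    return all(c >= K for c in counts.values())

def cheapest_safe_suppression():
    """Try suppression sets in order of increasing information loss
    (whole-column suppression, the simplest possible strategy)."""
    for size in range(len(QUASI_IDENTIFIERS) + 1):
        for suppressed in combinations(QUASI_IDENTIFIERS, size):
            if is_safe(set(suppressed)):
                return set(suppressed)
    return set(QUASI_IDENTIFIERS)

suppressed = cheapest_safe_suppression()
print("Suppress columns:", suppressed or "none")
for r in records:
    print(key(r, suppressed), r["income"])
```

On this toy data the search finds that suppressing only the "zip" column already yields groups of size at least 2, so no further information needs to be lost. Realistic methods (local suppression, recoding, perturbation) are far more refined, but follow the same safety-versus-loss trade-off described above.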
This book develops the idea introduced by Doukhan and Louhichi in 1999 for measuring the asymptotic independence of a random process. The authors, who helped develop this theory, propose examples of models fitting such conditions: stable Markov chains, dynamical systems, and more complicated nonlinear, non-Markovian, and heteroskedastic models with infinite memory. Applications are still needed to develop a method of analysis for nonlinear time series, and this book provides a strong basis for additional studies.
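As a minimal sketch of the kind of asymptotic independence such coefficients quantify (not an implementation of the book's weak-dependence coefficients), consider a stable AR(1) Markov chain: the empirical covariance between X_0 and X_n shrinks geometrically with the lag n. The coefficient, sample sizes, and seed below are arbitrary choices for illustration.

```python
# Illustrative sketch: stable AR(1) Markov chain X_{t+1} = a*X_t + eps_{t+1},
# |a| < 1. Dependence between X_0 and X_lag decays as the lag grows.
import random

def simulate_ar1(a=0.6, n_steps=20, n_paths=20000, seed=0):
    """Simulate many independent paths and return, for each lag,
    the empirical covariance between X_0 and X_lag."""
    rng = random.Random(seed)
    x0 = [rng.gauss(0, 1) for _ in range(n_paths)]
    x = list(x0)
    covs = []
    for _ in range(1, n_steps + 1):
        x = [a * xi + rng.gauss(0, 1) for xi in x]
        mean0 = sum(x0) / n_paths
        mean_t = sum(x) / n_paths
        cov = sum((u - mean0) * (v - mean_t) for u, v in zip(x0, x)) / n_paths
        covs.append(cov)
    return covs

for lag, cov in enumerate(simulate_ar1(), start=1):
    print(f"lag {lag:2d}: empirical cov(X_0, X_lag) = {cov:+.4f}")
```

For this chain the theoretical covariance at lag n is a^n times the initial variance, so the printed values decay roughly geometrically toward zero, the simplest instance of the asymptotic-independence behaviour that weak-dependence conditions generalize to nonlinear and non-Markovian models.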