This open access book provides an introduction to and an overview of learning to quantify (a.k.a. “quantification”), i.e., the task of training estimators of class proportions in unlabeled data by means of supervised learning. In data science, learning to quantify is a task of its own, related to classification yet different from it, since estimating class proportions by simply classifying all data and counting the labels assigned by the classifier is known to often return inaccurate (“biased”) class proportion estimates. The book introduces learning to quantify by looking at the supervised learning methods that can be used to perform it, at the evaluation measures and evaluation protocols that should be used for assessing the quality of the returned predictions, at the numerous fields of human activity in which the use of quantification techniques may provide improved results compared with the naive use of classification techniques, and at advanced topics in quantification research. The book is suitable for researchers, data scientists, and PhD students who want to come up to speed with the state of the art in learning to quantify, but also for researchers wishing to apply data science technologies to fields of human activity (e.g., the social sciences, political science, epidemiology, market research) which focus on aggregate (“macro”) data rather than on individual (“micro”) data.
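The following is a minimal sketch, not taken from the book, of why “classify and count” can return biased prevalence estimates when class proportions shift between training and test data, and of one standard correction discussed in the quantification literature (Forman’s Adjusted Classify and Count). It assumes numpy and scikit-learn are available; all variable names and the synthetic data are illustrative.

# Minimal sketch (illustrative, not from the book): classify-and-count bias
# under prior-probability shift, and the Adjusted Classify & Count correction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, prevalence):
    # Draw a binary-labelled sample with the given positive-class prevalence.
    y = (rng.random(n) < prevalence).astype(int)
    # One informative feature; classes overlap, so the classifier is imperfect
    # and classify-and-count inherits its errors.
    X = rng.normal(loc=y * 1.0, scale=1.0).reshape(-1, 1)
    return X, y

# Training data with 50% positives; test data with 10% positives (prior shift).
X_tr, y_tr = sample(20_000, 0.5)
X_te, y_te = sample(20_000, 0.1)

clf = LogisticRegression().fit(X_tr, y_tr)

# Classify and Count (CC): fraction of test items labelled positive.
cc = clf.predict(X_te).mean()

# Adjusted Classify and Count (ACC): correct CC using the classifier's
# true-positive and false-positive rates, estimated here on the training set
# for brevity (held-out data or cross-validation is preferable).
pred_tr = clf.predict(X_tr)
tpr = pred_tr[y_tr == 1].mean()
fpr = pred_tr[y_tr == 0].mean()
acc = np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0)

print(f"true test prevalence : {y_te.mean():.3f}")
print(f"classify-and-count   : {cc:.3f}  (pulled toward the training prevalence)")
print(f"adjusted count (ACC) : {acc:.3f}")

Run as written, the classify-and-count estimate lands well above the true 10% prevalence because the classifier was trained on balanced data, while the adjusted estimate comes out close to the true value.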
Since its birth, the field of Probabilistic Logic Programming has seen a steady increase in activity, with many proposals for languages and algorithms for inference and learning. This book aims at providing an overview of the field, with a special emphasis on languages under the Distribution Semantics, one of the most influential approaches. The book presents the main ideas for semantics, inference, and learning, and highlights connections between the methods. Many examples in the book include a link to a page of the web application http://cplint.eu where the code can be run online. This 2nd edition aims at reporting the most exciting novelties in the field since the publication of the 1st edition. The semantics for hybrid programs with function symbols has been placed on a sound footing. Probabilistic Answer Set Programming has gained a lot of interest, together with studies on the complexity of inference. Algorithms for solving the MPE and MAP tasks are now available. Inference for hybrid programs has changed dramatically with the introduction of Weighted Model Integration. With respect to learning, the first approaches for neuro-symbolic integration have appeared, together with algorithms for learning the structure of hybrid programs. Moreover, given the cost of learning PLPs, various works have proposed language restrictions to speed up learning and improve its scalability.
This book constitutes the refereed proceedings of the 25th European Conference on Information Retrieval Research, ECIR 2003, held in Pisa, Italy, in April 2003. The 31 revised full papers and 16 short papers presented together with two invited papers were carefully reviewed and selected from 101 submissions. The papers are organized in topical sections on IR and the Web; retrieval of structured documents; collaborative filtering and text mining; text representation and natural language processing; formal models and language models for IR; machine learning and IR; text categorization; usability, interactivity, and visualization; and architectural issues and efficiency.