This book constitutes the refereed proceedings of the First European Symposium on Principles of Data Mining and Knowledge Discovery, PKDD '97, held in Trondheim, Norway, in June 1997. The volume presents a total of 38 revised full papers together with abstracts of one invited talk and four tutorials. Among the topics covered are data and knowledge representation, statistical and probabilistic methods, logic-based approaches, man-machine interaction aspects, AI contributions, high performance computing support, machine learning, automated scientific discovery, quality assessment, and applications.
This book constitutes the refereed proceedings of the Third European Conference on Principles and Practice of Knowledge Discovery in Databases, PKDD'99, held in Prague, Czech Republic in September 1999. The 28 revised full papers and 48 poster presentations were carefully reviewed and selected from 106 full papers submitted. The papers are organized in topical sections on time series, applications, taxonomies and partitions, logic methods, distributed and multirelational databases, text mining and feature selection, rules and induction, and interesting and unusual issues.
Pat Langley is an Associate Professor in the Department of Information and Computer Science at the University of California, Irvine. Herbert Simon is a Professor in the Departments of Psychology, Computer Science, and Philosophy at Carnegie-Mellon University. Gary L. Bradshaw is an Assistant Professor in the Department of Psychology and Institute of Cognitive Science at the University of Colorado, Boulder. Jan M. Zytkow is an Associate Professor in the Computer Science Department at Wichita State University.
This book constitutes the refereed proceedings of the 4th European Conference on Principles and Practice of Knowledge Discovery in Databases, PKDD 2000, held in Lyon, France in September 2000. The 86 revised papers included in the book correspond to the 29 oral presentations and 57 posters presented at the conference. They were carefully reviewed and selected from 147 submissions. The book offers topical sections on new directions, rules and trees, databases and reward-based learning, classification, association rules and exceptions, instance-based discovery, clustering, and time series analysis.
This book constitutes the refereed proceedings of the Second European Symposium on Principles of Data Mining and Knowledge Discovery, PKDD '98, held in Nantes, France, in September 1998. The volume presents 26 revised papers corresponding to the oral presentations given at the conference; also included are refereed papers corresponding to the 30 poster presentations. These papers were selected from a total of 73 full draft submissions. The papers are organized in topical sections on rule evaluation, visualization, association rules and text mining, KDD process and software, tree construction, sequential and spatial data mining, and attribute selection.
As one of the fastest-growing areas of research in machine learning, metalearning studies principled methods for obtaining efficient models and solutions by adapting machine learning and data mining processes. This adaptation usually exploits information from past experience on other tasks, and the adaptive processes themselves can involve machine learning approaches. A closely related and currently very active area, automated machine learning (AutoML), is concerned with automating machine learning processes. Metalearning and AutoML can help AI systems learn to control the application of different learning methods and acquire new solutions faster, without unnecessary intervention from the user. This open access book offers a comprehensive and thorough introduction to almost all aspects of metalearning and AutoML, covering the basic concepts and architecture, evaluation, datasets, hyperparameter optimization, ensembles, and workflows, as well as how this knowledge can be used to select, combine, compose, adapt, and configure both algorithms and models to yield faster and better solutions to data mining and data science problems. It can thus help developers build systems that improve themselves through experience. This book is a substantial update of the first edition, published in 2009. It comprises 18 chapters, more than twice as many as the previous edition, which enabled the authors to cover the most relevant topics in greater depth and to incorporate overviews of recent research in each area. The book will be of interest to researchers and graduate students in machine learning, data mining, data science, and artificial intelligence.
Mechanizing hypothesis formation is an approach to exploratory data analysis. Its development started in the 1960s, inspired by the question “can computers formulate and verify scientific hypotheses?”, and resulted in a general theory of logic of discovery. It comprises theoretical calculi dealing with theoretical statements as well as observational calculi dealing with observational statements about finite results of observation. The two kinds of calculi are related through statistical hypothesis tests. The GUHA method is a tool of the logic of discovery. It uses a one-to-one relation between theoretical and observational statements to obtain all interesting theoretical statements. A GUHA procedure generates all interesting observational statements and verifies them against given observational data; its output consists of all observational statements true in the data. Several GUHA procedures dealing with association rules, couples of association rules, action rules, histograms, couples of histograms, and patterns based on general contingency tables are implemented in the LISp-Miner system developed at the Prague University of Economics and Business. Various results about observational calculi have been achieved and applied together with the LISp-Miner system. The book offers a brief overview of the logic of discovery, presents many examples of applications of the GUHA procedures to real problems relevant to data mining and business intelligence, provides an overview of recent research results on dealing with domain knowledge in data mining and its automation, and reports firsthand experience with an implementation of the GUHA method in Python.
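The generate-and-verify idea behind a GUHA procedure can be sketched in a few lines of Python. This is a minimal illustration, not the LISp-Miner API: the toy data, the attribute names, and the founded-implication thresholds are all invented for the example.

```python
from itertools import combinations

# Toy observational data: rows of Boolean attributes (hypothetical example).
data = [
    {"smoker": True,  "coffee": True,  "headache": True},
    {"smoker": True,  "coffee": False, "headache": True},
    {"smoker": False, "coffee": True,  "headache": False},
    {"smoker": True,  "coffee": True,  "headache": True},
    {"smoker": False, "coffee": False, "headache": False},
]

def four_fold(rows, antecedent, succedent):
    """Four-fold contingency table (a, b, c, d) for two Boolean attributes."""
    a = sum(1 for r in rows if r[antecedent] and r[succedent])
    b = sum(1 for r in rows if r[antecedent] and not r[succedent])
    c = sum(1 for r in rows if not r[antecedent] and r[succedent])
    d = sum(1 for r in rows if not r[antecedent] and not r[succedent])
    return a, b, c, d

def founded_implication(a, b, c, d, p=0.9, base=2):
    """Quantifier =>(p, base): confidence a/(a+b) >= p and support a >= base."""
    return a >= base and a + b > 0 and a / (a + b) >= p

def guha_procedure(rows, p=0.9, base=2):
    """Generate candidate observational statements over all attribute pairs
    and keep exactly those true in the given data."""
    attrs = sorted(rows[0])
    results = []
    for ant, suc in combinations(attrs, 2):
        for x, y in ((ant, suc), (suc, ant)):
            a, b, c, d = four_fold(rows, x, y)
            if founded_implication(a, b, c, d, p, base):
                results.append((x, y, a, b))
    return results

print(guha_procedure(data))
```

Real GUHA procedures work over far richer languages of derived Boolean attributes and quantifiers, but the shape is the same: systematically generate observational statements, evaluate each one against the data matrix, and output the true ones.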
In the nuanced and perceptive book Navalny, about Alexei Navalny, Dollbaum, Lallouet and Noble examine what Putin’s feared opposition leader means for the future of Russia. The book Navalny by Jan Matti Dollbaum, Morvan Lallouet and Ben Noble centres on opposition leader and activist Alexei Navalny: the key figure who gained worldwide fame as the first realistic opponent Putin has faced in years. After being poisoned in 2020 and evacuated to Germany, he returned to Russia in 2021, where, in front of the assembled world press, he was arrested immediately on arrival. For his efforts against President Putin he won the prestigious European Sakharov Prize that same year. Navalny has many faces. Some see him as a fighter for democracy; others consider him a traitor to his motherland, or a nationalist and a racist. This book illuminates and explains Navalny’s contradictory sides: the man who, even behind bars, remains the second most important political figure in Russia. Navalny’s political life, his groundbreaking anti-corruption investigations, his ideas, and his relationship with the Kremlin: to understand today’s Russia, you have to understand Alexei Navalny. ‘A brilliant and highly readable analysis of one of the most compelling figures in Russian politics.’ – Financial Times ‘Alexei Navalny is the true leader of Russia’ – The New York Times
Observational calculi were introduced in the 1960s as a tool of the logic of discovery. Formulas of observational calculi correspond to assertions about analysed data, and the truth of suitable assertions can lead to the acceptance of new scientific hypotheses. The general goal was to automate the discovery of scientific knowledge using mathematical logic and statistics. The GUHA method was developed to produce true formulas of observational calculi relevant to a given problem of scientific discovery, and theoretically interesting and practically important results on observational calculi were achieved. Special attention was paid to formulas built from couples of Boolean attributes derived from columns of the analysed data matrix; association rules, introduced in the 1990s, can be seen as a special case of such formulas. New results on logical calculi and association rules have since been achieved, and they can be seen as a logic of association rules. This can contribute to solving contemporary challenging problems of data mining research and practice. The book covers the logic of association rules thoroughly and puts it into the context of current research in data mining. Examples of applications of the theoretical results to real problems are presented, and new open problems and challenges are listed. Overall, the book is a valuable source of information for researchers as well as for teachers and students interested in data mining.
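A central notion in this logic is the class of implicational quantifiers: quantifiers over the four-fold table of an association rule whose truth is preserved when the number of confirming cases grows and the number of refuting cases shrinks. The sketch below is ours, not from any particular library, and the threshold values are illustrative; it checks this defining monotonicity property on a small grid for founded implication (a confidence-and-support quantifier).

```python
def founded_implication(a, b, p=0.9, base=3):
    """4ft-quantifier =>(p, base): confidence a/(a+b) >= p and support a >= base.
    Here a counts confirming rows, b counts refuting rows."""
    return a >= base and a + b > 0 and a / (a + b) >= p

def is_implicational(quantifier, limit=15):
    """Brute-force check of the implicational property on a small grid:
    whenever the quantifier holds for (a, b), it must also hold for every
    (a2, b2) with a2 >= a and b2 <= b (more confirming, fewer refuting cases)."""
    for a in range(limit):
        for b in range(limit):
            if quantifier(a, b):
                for a2 in range(a, limit):
                    for b2 in range(b + 1):
                        if not quantifier(a2, b2):
                            return False
    return True

print(is_implicational(founded_implication))          # founded implication passes
print(is_implicational(lambda a, b: a == 5))          # an exact-count "quantifier" fails
```

Properties like this one are what give the logic its deduction rules: for an implicational quantifier, the truth of one rule in the data can guarantee the truth of another without re-scanning the data matrix.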
Drawing Programs: The Theory and Practice of Schematic Functional Programming describes a diagrammatic (schematic) approach to programming. It introduces a sophisticated tool for programmers who would rather work with diagrams than with text. The language is a complete functional language that has evolved into a unique representation scheme, and the result is a simple, coherent description of the process of modelling with the computer. The experience of using this tool is introduced gradually through examples, small projects, and exercises. The new computational theory behind the tool is interspersed between these practical descriptions, so that the reasons for the activity can be understood and the activity, in turn, illustrates some elements of the theory. Access to the tool, its source code, and a set of examples ranging from the simple to the complex is free (see www.springer.com/978-1-84882-617-5). A description of the tool’s construction, and of how it may be extended, is also given. In the authors’ experience, undergraduates and graduates who have acquired the understanding and skills of a functional language by working with schemata also show an enhanced ability to program in other computer languages. Readers are provided with a set of concepts that will ensure good, robust program design and, more importantly, a path to error-free programming.