This book is about Granular Computing (GC), an emerging conceptual and computing paradigm of information processing. As the name suggests, GC concerns the processing of complex information entities: information granules. In essence, information granules arise in the process of abstracting data and deriving knowledge from information. Information granules are everywhere. We commonly use granules of time (seconds, months, years). We granulate images: millions of pixels, manipulated individually by a computer, appear to us as granules representing physical objects. In natural language, we operate on word-granules that become crucial entities used to realize interaction and communication between humans. Intuitively, we sense that information granules are at the heart of all our perceptual activities. In the past, several formal frameworks and tools geared toward processing specific information granules have been proposed. Interval analysis, rough sets, and fuzzy sets have all played an important role in knowledge representation and processing. Subsequently, information granulation and information granules arose in numerous application domains. The well-known idea of rule-based systems dwells inherently on information granules. Qualitative modeling, one of the leading threads of AI, operates on the level of information granules. Multi-tier architectures and hierarchical systems (such as those encountered in control engineering), planning systems, and scheduling systems all exploit information granularity. We also rely on information granules for functionality granulation, reusability of information, and efficient development of the underlying information infrastructures.
The book outlines selected projects conducted under the supervision of the author. Moreover, it discusses significant relations between Interactive Granular Computing (IGrC) and numerous dynamically developing scientific domains worldwide, along with features characteristic of the author's approach to IGrC. The results presented are a continuation and elaboration of various aspects of Wisdom Technology, initiated and developed in cooperation with Professor Andrzej Skowron. Based on the empirical findings from these projects, the author explores the following areas: (a) understanding the causes of the theory and practice gap problem (TPGP) in complex systems engineering (CSE); (b) generalizing computing models of complex adaptive systems (CAS) (in particular, natural computing models) by constructing an interactive granular computing (IGrC) model of networks of interrelated, interacting complex granules (c-granules), belonging to a single agent and/or to a group of agents; (c) developing methodologies based on the IGrC model to minimize the negative consequences of the TPGP. The book introduces approaches to the above issues using the proposed IGrC model. In particular, the IGrC model refers to the key mechanisms used to control the processes related to the implementation of CSE projects. One of the main aims was to develop a mechanism of IGrC control over computations that model a project's implementation processes, so as to maximize the chances of success while minimizing the emerging risks. In this regard, IGrC control is usually performed by means of properly selected project principles enforced among project participants. These principles constitute examples of c-granules, expressed by complex vague concepts (themselves represented by c-granules). The c-granules evolve with time; in particular, the meaning of the concepts is also subject to change.
This methodology is illustrated using project principles applied by the author during the implementation of the POLTAX, AlgoTradix, Merix, and Excavio projects outlined in the book.
In this book, the following three approaches to data analysis are presented:
- Test Theory, founded by Sergei V. Yablonskii (1924-1998); the first publications appeared in 1955 and 1958,
- Rough Sets, founded by Zdzisław I. Pawlak (1926-2006); the first publications appeared in 1981 and 1982,
- Logical Analysis of Data, founded by Peter L. Hammer (1936-2006); the first publications appeared in 1986 and 1988.
These three approaches have much in common, but researchers active in one of these areas often have limited knowledge of the results and methods developed in the other two. On the other hand, each approach shows some originality, and we believe that an exchange of knowledge can stimulate the further development of each of them. This can lead to new theoretical results and real-life applications; in particular, new results based on a combination of these three data analysis approaches can be expected.
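As a minimal illustration of one of these approaches, a rough-set analysis in Pawlak's sense approximates a target set of objects from below and above with respect to an indiscernibility relation induced by attributes. The sketch below uses hypothetical toy data; the function names and the example table are ours, not taken from any of the cited works.

```python
# Minimal sketch of Pawlak-style rough set approximations.
# Objects are described by attribute values; an indiscernibility relation
# groups objects with identical values on the chosen attributes, and a
# target set X is approximated from below (certain members) and above
# (possible members) by those groups.

from collections import defaultdict

def indiscernibility_classes(objects, attributes):
    """Group object ids by their values on the chosen attributes."""
    classes = defaultdict(set)
    for obj_id, desc in objects.items():
        key = tuple(desc[a] for a in attributes)
        classes[key].add(obj_id)
    return list(classes.values())

def approximations(objects, attributes, target):
    """Return (lower, upper) approximations of the target set."""
    lower, upper = set(), set()
    for cls in indiscernibility_classes(objects, attributes):
        if cls <= target:      # class entirely inside X -> certainly in X
            lower |= cls
        if cls & target:       # class overlaps X -> possibly in X
            upper |= cls
    return lower, upper

# Hypothetical toy data: patients described by two symptoms.
objects = {
    1: {"fever": "yes", "cough": "yes"},
    2: {"fever": "yes", "cough": "yes"},
    3: {"fever": "no",  "cough": "yes"},
    4: {"fever": "no",  "cough": "no"},
}
flu = {1, 3}  # target set X

lower, upper = approximations(objects, ["fever", "cough"], flu)
print(lower)  # {3}        - objects certainly in X
print(upper)  # {1, 2, 3}  - objects possibly in X
```

Objects 1 and 2 are indiscernible by the chosen attributes, so neither can be classified with certainty: object 1 belongs only to the upper approximation, and the boundary region is the difference between the two approximations.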
This comprehensive textbook on data mining details the steps of the knowledge discovery process, which prescribes the sequence in which data mining projects should be performed: from problem and data understanding through data preprocessing to deployment of the results. This knowledge discovery approach is what distinguishes Data Mining from other texts in this area. The book provides a suite of exercises and includes links to instructional presentations. Furthermore, it contains appendices of relevant mathematical material.
Andrzej Mostowski was one of the leading logicians of the 20th century. This volume, devoted both to his scientific heritage and to his memory as a great researcher, teacher, organizer of science, and person, examines his legacy. It includes a bibliography of Mostowski's writings.