In graph-based structural pattern recognition, the idea is to transform patterns into graphs and perform the analysis and recognition of patterns in the graph domain, commonly referred to as graph matching. A large number of methods for graph matching have been proposed. Graph edit distance, for instance, defines the dissimilarity of two graphs by the amount of distortion that is needed to transform one graph into the other and is considered one of the most flexible methods for error-tolerant graph matching. This book focuses on graph kernel functions that are highly tolerant towards structural errors. The basic idea is to incorporate concepts from graph edit distance into kernel functions, thus combining the flexibility of edit distance-based graph matching with the power of kernel machines for pattern recognition. The authors introduce a collection of novel graph kernels related to edit distance, including diffusion kernels, convolution kernels, and random walk kernels. From an experimental evaluation of a semi-artificial line drawing data set and four real-world data sets consisting of pictures, microscopic images, fingerprints, and molecules, the authors demonstrate that some of the kernel functions in conjunction with support vector machines significantly outperform traditional edit distance-based nearest-neighbor classifiers, both in terms of classification accuracy and running time.
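The notion of edit distance described above can be made concrete with a minimal sketch. The following toy implementation (not taken from the book, and restricted to small unlabeled graphs with the same node count, where only edge insertions and deletions carry cost) brute-forces all node correspondences to find the cheapest transformation; practical systems use approximate algorithms instead, since exact computation is exponential.

```python
from itertools import permutations

def simple_edit_distance(n, edges_a, edges_b):
    """Toy edit distance between two unlabeled graphs on n nodes:
    the minimum number of edge insertions/deletions needed to turn
    one graph into the other, minimized over all node correspondences."""
    ea = {frozenset(e) for e in edges_a}
    best = None
    for perm in permutations(range(n)):
        # Relabel graph b's nodes according to this correspondence.
        eb = {frozenset((perm[u], perm[v])) for (u, v) in edges_b}
        cost = len(ea ^ eb)  # edges present in one graph but not the other
        best = cost if best is None else min(best, cost)
    return best

# A triangle and a path on three nodes differ by exactly one edge.
print(simple_edit_distance(3, [(0, 1), (1, 2), (0, 2)], [(0, 1), (1, 2)]))  # 1
```

The exponential loop over permutations is exactly why the error-tolerant matching literature, including this book, works with approximations and kernel-based surrogates rather than the exact distance.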
This book addresses the task of processing online handwritten notes acquired from an electronic whiteboard, which is a new modality in handwriting recognition research. The main motivation of this book is smart meeting rooms, which aim to automate standard tasks usually performed by humans in a meeting. The book can be summarized as follows. A new online handwritten database is compiled, and four handwriting recognition systems are developed. Moreover, novel preprocessing and normalization strategies are designed especially for whiteboard notes and a new neural network based recognizer is applied. Commercial recognition systems are included in a multiple classifier system. The experimental results on the test set show a highly significant improvement of the recognition performance to more than 86%.
This book describes exciting new opportunities for utilizing robust graph representations of data with common machine learning algorithms. Graphs can model additional information which is often not present in commonly used data representations, such as vectors. Through the use of graph distance — a relatively new approach for determining graph similarity — the authors show how well-known algorithms, such as k-means clustering and k-nearest neighbors classification, can be easily extended to work with graphs instead of vectors. This allows for the utilization of additional information found in graph representations, while at the same time employing well-known, proven algorithms. To demonstrate and investigate these novel techniques, the authors have selected the domain of web content mining, which involves the clustering and classification of web documents based on their textual substance. Several methods of representing web document content by graphs are introduced; an interesting feature of these representations is that they allow for a polynomial time distance computation, something which is typically an NP-complete problem when using graphs. Experimental results are reported for both clustering and classification in three web document collections using a variety of graph representations, distance measures, and algorithm parameters. In addition, this book describes several other related topics, many of which provide excellent starting points for researchers and students interested in exploring this new area of machine learning further. These topics include creating graph-based multiple classifier ensembles through random node selection and visualization of graph-based data using multidimensional scaling.
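The key observation above — that k-nearest neighbors needs only a distance function, not a vector space — can be sketched in a few lines. This is an illustrative stand-in, not the book's method: `edge_set_distance` is a deliberately simple toy distance (the book's graph distances are based on maximum common subgraphs), and the labels and graphs are invented.

```python
def edge_set_distance(g1, g2):
    """Toy graph distance: size of the symmetric difference of the
    two edge sets (assumes graphs share a node-label vocabulary)."""
    return len(set(g1) ^ set(g2))

def knn_classify(query, training, k=3, dist=edge_set_distance):
    """k-nearest-neighbors over arbitrary objects: because only a
    distance function is required, graphs work as well as vectors."""
    neighbors = sorted(training, key=lambda item: dist(query, item[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)  # majority vote

training = [
    ([("a", "b"), ("b", "c")], "chain"),
    ([("a", "b"), ("b", "c"), ("c", "d")], "chain"),
    ([("a", "b"), ("b", "c"), ("c", "a")], "cycle"),
]
query = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
print(knn_classify(query, training, k=1))  # chain
```

Swapping in a more faithful graph distance changes only the `dist` argument; the classifier itself is untouched, which is precisely the appeal of the approach described in the book.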
This book presents novel graph-theoretic methods for complex computer vision and pattern recognition tasks. It presents the application of graph theory to low-level processing of digital images, presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, and provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks.
This monograph treats the application of numerous graph-theoretic algorithms to a comprehensive analysis of dynamic enterprise networks. Network dynamics analysis yields valuable information about network performance, efficiency, fault prediction, cost optimization, indicators and warnings. Based on many years of applied research on generic network dynamics, this work covers a number of elegant applications (including many new and experimental results) of traditional graph theory algorithms and techniques to computationally tractable network dynamics analysis to motivate network analysts, practitioners and researchers alike.
Optical character recognition and document image analysis have become very important areas with a fast growing number of researchers in the field. This comprehensive handbook, with contributions by eminent experts, presents both the theoretical and practical aspects at an introductory level wherever possible.
A sharp increase in the computing power of modern computers has triggered the development of powerful algorithms that can analyze complex patterns in large amounts of data within a short time period. Consequently, it has become possible to apply pattern recognition techniques to new tasks. The main goal of this book is to cover some of the latest application domains of pattern recognition while presenting novel techniques that have been developed or customized in those domains.
Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described, and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.
This book constitutes the thoroughly refereed post-proceedings of an international workshop on sensor based intelligent robots held in Dagstuhl Castle, Germany in September/October 1998. The 17 revised full papers presented were carefully reviewed for inclusion in the book. Among the topics addressed are robot navigation, motion planning, autonomous mobile robots, wheelchair robots, interactive robots, car navigation systems, visual tracking, sensor based navigation, distributed algorithms, computer vision, intelligent agents, robot control, and computational geometry.
This book is concerned with a fundamentally novel approach to graph-based pattern recognition based on vector space embedding of graphs. It aims at condensing the high representational power of graphs into a computationally efficient and mathematically convenient feature vector. This volume utilizes the dissimilarity space representation originally proposed by Duin and Pekalska to embed graphs in real vector spaces. Such an embedding gives one access to all algorithms developed in the past for feature vectors, which have been the predominant representation formalism in pattern recognition and related areas for a long time.
This book constitutes the refereed proceedings of the 7th International Conference on Document Analysis Systems, DAS 2006, held in Nelson, New Zealand, in February 2006. The 33 revised full papers and 22 poster papers presented were carefully reviewed and selected from 78 submissions. The papers are organized in topical sections on digital libraries, image processing, handwriting, document structure and format, tables, language and script identification, systems and performance evaluation, and retrieval and segmentation.
Many business decisions are made in the absence of complete information about the decision consequences. Credit lines are approved without knowing the future behavior of the customers; stocks are bought and sold without knowing their future prices; parts are manufactured without knowing all the factors affecting their final quality; etc. All these cases can be categorized as decision making under uncertainty. Decision makers (human or automated) can handle uncertainty in different ways. Deferring the decision due to the lack of sufficient information may not be an option, especially in real-time systems. Sometimes expert rules, based on experience and intuition, are used. A decision tree is a popular form of representing a set of mutually exclusive rules. An example of a two-branch tree is: if a credit applicant is a student, approve; otherwise, decline. Expert rules are usually based on some hidden assumptions, which try to predict the decision consequences. A hidden assumption of the last rule set is: a student will be a profitable customer. Since direct predictions of the future may not be accurate, a decision maker can consider using some information from the past. The idea is to utilize the potential similarity between the patterns of the past (e.g., "most students used to be profitable") and the patterns of the future (e.g., "students will be profitable").
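The two-branch expert rule quoted above is small enough to write out directly. This sketch simply makes the rule and its hidden assumption explicit in code; the function name and the dictionary-based applicant record are illustrative choices, not anything prescribed by the book.

```python
def approve_credit(applicant):
    """The two-branch decision tree from the text, as code.
    The hidden assumption ("a student will be a profitable customer")
    lives entirely in the branch condition, not in any observed data."""
    if applicant.get("is_student"):
        return "approve"
    return "decline"

print(approve_credit({"is_student": True}))   # approve
print(approve_credit({"is_student": False}))  # decline
```

Data-driven decision-tree learning, as discussed in the book, replaces such hand-written branches with splits induced from past records, so the assumption is tested against history rather than merely presumed.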
An inadequate infrastructure for software testing is causing major losses to the world economy. The characteristics of software quality problems are quite similar to other tasks successfully tackled by artificial intelligence techniques. The aims of this book are to present state-of-the-art applications of artificial intelligence and data mining methods to quality assurance of complex software systems, and to encourage further research in this important and challenging area. Contents: Fuzzy Cause-Effect Models of Software Testing (W Pedrycz & G Vukovich); Black-Box Testing with Info-Fuzzy Networks (M Last & M Friedman); Automated GUI Regression Testing Using AI Planning (A M Memon); Test Set Generation and Reduction with Artificial Neural Networks (P Saraph et al.); Three-Group Software Quality Classification Modeling Using an Automated Reasoning Approach (T M Khoshgoftaar & N Seliya); Data Mining with Resampling in Software Metrics Databases (S Dick & A Kandel). Readership: Students, researchers and professionals in computer science, information systems, software testing and data mining.
The field of pattern recognition has seen enormous progress since its beginnings almost 50 years ago. A large number of different approaches have been proposed. Hybrid methods aim at combining the advantages of different paradigms within a single system. Hybrid Methods in Pattern Recognition is a collection of articles describing recent progress in this emerging field. It covers topics such as the combination of neural nets with fuzzy systems or hidden Markov models, neural networks for the processing of symbolic data structures, hybrid methods in data mining, the combination of symbolic and subsymbolic learning, and so on. Also included is recent work on multiple classifier systems. Furthermore, the book deals with applications in on-line and off-line handwriting recognition, remotely sensed image interpretation, fingerprint identification, and automatic text categorization.
Rapid advances in sensors, computers, and algorithms continue to fuel dramatic improvements in intelligent robots. In addition, robot vehicles are starting to appear in a number of applications. For example, they have been installed in public settings to perform such tasks as delivering items in hospitals and cleaning floors in supermarkets; recently, two small robot vehicles were launched to explore Mars.This book presents the latest advances in the principal fields that contribute to robotics. It contains contributions written by leading experts addressing topics such as Path and Motion Planning, Navigation and Sensing, Vision and Object Recognition, Environment Modeling, and others.
This edited and reviewed volume consists of papers that were originally presented at a workshop in the Scientific Center at Schloss Dagstuhl, Germany. It gives an overview of the field and presents the latest developments in the areas of modeling and planning for sensor based robots. The particular topics addressed include active vision, sensor fusion, environment modeling, motion planning, robot navigation, distributed control architectures, reactive behavior, and others.
This completely updated second edition of Radiation Exposure and Image Quality in X-ray Diagnostic Radiology provides the reader with detailed guidance on the optimization of radiological imaging. The basic physical principles of diagnostic radiology are first presented in detail, and their application to clinical problems is then carefully explored. The final section is a supplement containing tables of data and graphical depictions of X-ray spectra, interaction coefficients, characteristics of X-ray beams, and other aspects relevant to patient dose calculations. In addition, a complementary CD-ROM contains a user-friendly Excel file database covering these aspects that can be used in the reader’s own programs. This book will be an invaluable aid to medical physicists when performing calculations relating to patient dose and image quality, and will also prove useful for diagnostic radiologists and engineers.