Covariance matrices play important roles in many areas of mathematics, statistics, and machine learning, as well as in their applications. In computer vision and image processing, they give rise to a powerful data representation, namely the covariance descriptor, which has numerous practical applications. In this book, we begin with an overview of the {\it finite-dimensional covariance matrix} approach to image representation, along with its statistical interpretation. In particular, we discuss the various distances and divergences that arise from the intrinsic geometrical structures of the set of Symmetric Positive Definite (SPD) matrices, namely its Riemannian manifold and convex cone structures. Computationally, we focus on kernel methods on covariance matrices, especially those using the Log-Euclidean distance. We then present some of the latest developments in generalizing the finite-dimensional covariance matrix representation to the {\it infinite-dimensional covariance operator} representation via positive definite kernels. We describe the generalization of the affine-invariant Riemannian metric and the Log-Hilbert-Schmidt metric, which generalizes the Log-Euclidean distance. Computationally, we focus on kernel methods on covariance operators, especially those using the Log-Hilbert-Schmidt distance. Specifically, we present a two-layer kernel machine based on the Log-Hilbert-Schmidt distance and its finite-dimensional approximation, which reduces the computational complexity of the exact formulation while largely preserving its capability. Theoretical analysis shows that, mathematically, the approximate Log-Hilbert-Schmidt distance should be preferred over the approximate Log-Hilbert-Schmidt inner product and, computationally, over the approximate affine-invariant Riemannian distance. Numerical experiments on image classification demonstrate significant improvements of the infinite-dimensional formulation over its finite-dimensional counterpart.
Given the numerous applications of covariance matrices across mathematics, statistics, and machine learning, we expect that the infinite-dimensional covariance operator formulation presented here will find many applications beyond computer vision.
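The Log-Euclidean distance mentioned above has a simple closed form: it is the Frobenius norm of the difference between the matrix logarithms of two SPD matrices. A minimal sketch in Python (assuming NumPy; the SPD matrices here are toy covariance matrices built from random data, not from the book):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via its eigendecomposition:
    log(S) = V diag(log(w)) V^T, where S = V diag(w) V^T."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: ||log(A) - log(B)||_F."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

# Two toy SPD matrices: sample covariances of random data,
# regularized slightly to guarantee positive definiteness.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = rng.standard_normal((100, 3))
A = np.cov(X, rowvar=False) + 1e-6 * np.eye(3)
B = np.cov(Y, rowvar=False) + 1e-6 * np.eye(3)

print(log_euclidean_distance(A, B))  # a nonnegative scalar
print(log_euclidean_distance(A, A))  # essentially 0.0
```

Because the logarithm flattens the SPD manifold into a vector space, this distance can be plugged directly into kernels such as the Gaussian kernel, which is what makes kernel methods on covariance descriptors tractable.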
Group and Crowd Behavior for Computer Vision provides a multidisciplinary perspective on how to solve the problem of group and crowd analysis and modeling, combining insights from the social sciences with technological ideas in computer vision and pattern recognition. The book addresses many unresolved issues in group and crowd behavior. Part One provides an introduction to the problems of analyzing groups and crowds, stressing that they should not be considered as completely separate entities but as aggregations of people. Part Two focuses on features and representations aimed at recognizing the presence of groups and crowds in image and video data. It discusses low-level processing methods for identifying when and where a group or crowd appears in a scene, ranging from the use of people detectors to more ad hoc strategies for detecting group and crowd formations. Part Three discusses methods for analyzing the behavior of groups and crowds once they have been detected, showing how to extract semantic information, predict and track the movement of a group, model the formation or disaggregation of a group or crowd, and identify different kinds of groups and crowds based on their behavior. The final section focuses on identifying and promoting datasets for group and crowd analysis and modeling, presenting and discussing metrics for evaluating the pros and cons of the various models and methods. This book gives computer vision researchers techniques for segmentation and grouping, tracking, and reasoning that apply to group and crowd modeling and analysis, as well as to more general problems in computer vision and machine learning.
- Presents the first book to cover the topic of modeling and analysis of groups in computer vision
- Discusses group and crowd modeling from a cross-disciplinary perspective, translating social science and anthropological theories into computer vision algorithms
- Focuses on metrics for group and crowd analysis
- Discusses real industrial systems dealing with the problem of analyzing groups and crowds