In this book, the following three approaches to data analysis are presented: Test Theory, founded by Sergei V. Yablonskii (1924-1998), with first publications appearing in 1955 and 1958; Rough Sets, founded by Zdzisław I. Pawlak (1926-2006), with first publications appearing in 1981 and 1982; and Logical Analysis of Data, founded by Peter L. Hammer (1936-2006), with first publications appearing in 1986 and 1988. These three approaches have much in common, but researchers active in one of these areas often have limited knowledge of the results and methods developed in the other two. On the other hand, each of the approaches shows some originality, and we believe that the exchange of knowledge can stimulate further development of each of them. This can lead to new theoretical results and real-life applications; in particular, new results based on a combination of these three data analysis approaches can be expected.
The book outlines selected projects conducted under the supervision of the author. Moreover, it discusses significant relations between Interactive Granular Computing (IGrC) and numerous dynamically developing scientific domains worldwide, along with features characteristic of the author’s approach to IGrC. The results presented are a continuation and elaboration of various aspects of Wisdom Technology, initiated and developed in cooperation with Professor Andrzej Skowron. Based on the empirical findings from these projects, the author explores the following areas: (a) understanding the causes of the theory and practice gap problem (TPGP) in complex systems engineering (CSE); (b) generalizing computing models of complex adaptive systems (CAS) (in particular, natural computing models) by constructing an interactive granular computing (IGrC) model of networks of interrelated interacting complex granules (c-granules), belonging to a single agent and/or to a group of agents; (c) developing methodologies based on the IGrC model to minimize the negative consequences of the TPGP. The book introduces approaches to the above issues using the proposed IGrC model. In particular, the IGrC model refers to the key mechanisms used to control the processes related to the implementation of CSE projects. One of the main aims was to develop a mechanism of IGrC control over computations that model a project’s implementation processes, so as to maximize the chances of its success while minimizing the emerging risks. In this regard, IGrC control is usually performed by means of properly selected project principles, enforced among project participants. These principles constitute examples of c-granules, expressed by complex vague concepts (themselves represented by c-granules). The c-granules evolve with time; in particular, the meaning of the concepts is also subject to change. This methodology is illustrated using project principles applied by the author during the implementation of the POLTAX, AlgoTradix, Merix, and Excavio projects outlined in the book.
This book is about Granular Computing (GC) - an emerging conceptual and computing paradigm of information processing. As the name suggests, GC concerns the processing of complex information entities - information granules. In essence, information granules arise in the process of abstraction of data and derivation of knowledge from information. Information granules are everywhere. We commonly use granules of time (seconds, months, years). We granulate images; millions of pixels manipulated individually by computers appear to us as granules representing physical objects. In natural language, we operate on the basis of word-granules that become crucial entities used to realize interaction and communication between humans. Intuitively, we sense that information granules are at the heart of all our perceptual activities. In the past, several formal frameworks and tools geared for processing specific information granules have been proposed. Interval analysis, rough sets, and fuzzy sets have all played an important role in knowledge representation and processing. Subsequently, information granulation and information granules arose in numerous application domains. Well-known ideas of rule-based systems dwell inherently on information granules. Qualitative modeling, being one of the leading threads of AI, operates on a level of information granules. Multi-tier architectures and hierarchical systems (such as those encountered in control engineering), planning and scheduling systems all exploit information granularity. We also utilize information granules when it comes to functionality granulation, reusability of information, and efficient ways of developing underlying information infrastructures.
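As a toy illustration of the time granules mentioned above (a minimal sketch, not taken from the book; the function name and the dictionary-based representation are assumptions made here), fine-grained timestamped readings can be abstracted into coarser granules such as months or years:

```python
from datetime import datetime

def granulate_timestamps(stamps, level="month"):
    """Group fine-grained observations into coarser time granules,
    illustrating granulation as abstraction over raw data."""
    fmt = {"second": "%Y-%m-%d %H:%M:%S", "month": "%Y-%m", "year": "%Y"}[level]
    granules = {}
    for s in stamps:
        granules.setdefault(s.strftime(fmt), []).append(s)
    return granules

readings = [datetime(2024, 1, 5), datetime(2024, 1, 20), datetime(2024, 3, 2)]
print(list(granulate_timestamps(readings)))  # ['2024-01', '2024-03']
```

Each key of the returned dictionary is a granule; the associated list records which raw observations it abstracts.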
This comprehensive textbook on data mining details the unique steps of the knowledge discovery process that prescribes the sequence in which data mining projects should be performed, from problem and data understanding through data preprocessing to deployment of the results. This knowledge discovery approach is what distinguishes Data Mining from other texts in this area. The book provides a suite of exercises and includes links to instructional presentations. Furthermore, it contains appendices of relevant mathematical material.
This thesis is concerned with investigating elements of computational social choice in the light of real-world applications. We contribute to a better understanding of the areas of fair allocation and multiwinner voting. For both areas, inspired by real-world scenarios, we propose several new notions and extensions of existing models. Then, we analyze the complexity of answering the computational questions raised by the introduced concepts. To this end, we look through the lens of parameterized complexity. We identify different parameters which describe natural features specific to the computational problems we investigate. Exploiting the parameters, we successfully develop efficient algorithms for specific cases of the studied problems. We complement our analysis by showing which parameters presumably cannot be utilized for seeking efficient algorithms. Thereby, we provide comprehensive pictures of the computational complexity of the studied problems. Specifically, we concentrate on four topics that we present below, grouped by our two areas of interest. For all but one topic, we present experimental studies based on implementations of newly developed algorithms. We first focus on fair allocation of indivisible resources. In this setting, we consider a collection of indivisible resources and a group of agents. Each agent reports its utility evaluation of every resource, and the task is to “fairly” allocate the resources such that each resource is allocated to at most one agent. We concentrate on the two following issues regarding this scenario. The social context in fair allocation of indivisible resources. In many fair allocation settings, it is unlikely that every agent knows all other agents. For example, consider a scenario where the agents represent employees of a large corporation. It is highly unlikely that every employee knows every other employee. Motivated by such settings, we come up with a new model of graph envy-freeness by adapting the classical envy-freeness notion to account for social relations of agents modeled as social networks. We show that if the given social network of agents is simple (for example, if it is a directed acyclic graph), then indeed we can sometimes find fair allocations efficiently. However, we contrast these tractability results by showing NP-hardness for several cases, including those in which the given social network has constant degree. Fair allocations among few agents with bounded rationality. Bounded rationality is the idea that humans, due to cognitive limitations, tend to simplify the problems they face. One of its manifestations is that human agents usually report simple utilities over the resources they want to allocate; for example, agents may categorize the available resources into only two groups of desirable and undesirable ones. Applying techniques for solving integer linear programs, we show that exploiting bounded rationality leads to efficient algorithms for finding envy-free and Pareto-efficient allocations, assuming a small number of agents. Further, we demonstrate that our result actually forms a framework that can be applied to a number of different fairness concepts, like envy-freeness up to one good or envy-freeness up to any good. This way, we obtain efficient algorithms for a number of fair allocation problems (assuming few agents with bounded rationality). We also empirically show that our technique is applicable in practice.
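The graph envy-freeness notion described above admits a straightforward check once an allocation is given. The following is a minimal sketch (its assumptions, none taken from the thesis itself: additive utilities, string identifiers, and a directed arc (i, j) meaning that agent i observes agent j):

```python
from typing import Dict, List, Set, Tuple

def is_graph_envy_free(
    utilities: Dict[str, Dict[str, int]],   # agent -> resource -> utility
    allocation: Dict[str, Set[str]],        # agent -> allocated bundle
    network: List[Tuple[str, str]],         # arcs (i, j): agent i sees agent j
) -> bool:
    """An allocation is graph-envy-free if no agent values the bundle of
    an agent it can see more than its own bundle."""
    def value(agent: str, bundle: Set[str]) -> int:
        # Additive utilities are a simplifying assumption of this sketch.
        return sum(utilities[agent][r] for r in bundle)

    return all(
        value(i, allocation[i]) >= value(i, allocation[j])
        for (i, j) in network
    )

# Usage: two agents who see each other, each receiving its preferred resource.
utils = {"a": {"x": 3, "y": 1}, "b": {"x": 1, "y": 3}}
alloc = {"a": {"x"}, "b": {"y"}}
print(is_graph_envy_free(utils, alloc, [("a", "b"), ("b", "a")]))  # True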
Further, we study multiwinner voting, where we are given a collection of voters and their preferences over a set of candidates. The outcome of a multiwinner voting rule is a group (or a set of groups in case of ties) of candidates that reflects the voters’ preferences best according to some objective. In this context, we investigate the following themes. The robustness of election outcomes. We study how robust the outcomes of multiwinner elections are against possible mistakes made by voters. Assuming that each voter casts a ballot in the form of a ranking of candidates, we represent a mistake by a swap of adjacent candidates in a ballot. We find that for rules such as SNTV, k-Approval, and k-Borda, it is computationally easy to find the minimum number of swaps resulting in a change of an outcome. This task is, however, NP-hard for STV and the Chamberlin-Courant rule. We conclude our study of robustness by experimentally studying the average number of random swaps leading to a change of an outcome for several rules. Strategic voting in multiwinner elections. We ask whether a given group of cooperating voters can manipulate an election outcome in a favorable way. We focus on the k-Approval voting rule and show that the computational complexity of answering the posed question has a rich structure. We spot several cases for which our problem is polynomial-time solvable. However, we also identify NP-hard cases. For several of them, we show how to circumvent the hardness via fixed-parameter tractability. We also present experimental studies indicating that our algorithms are applicable in practice.
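To make the swap-based robustness notion above concrete, here is a brute-force sketch (an illustration only, not the thesis’s polynomial-time algorithms; lexicographic tie-breaking and a committee size equal to k are simplifying assumptions made here) that checks whether a single adjacent swap in some ballot already changes the k-Approval outcome:

```python
from typing import List

def k_approval_winners(ballots: List[List[str]], k: int) -> frozenset:
    """k-Approval committee: each candidate scores one point per voter
    ranking it among the top k; the k best-scoring candidates win
    (ties broken lexicographically in this sketch)."""
    scores = {c: 0 for c in ballots[0]}  # assumes all ballots rank the same candidates
    for ballot in ballots:
        for c in ballot[:k]:
            scores[c] += 1
    ranked = sorted(scores, key=lambda c: (-scores[c], c))
    return frozenset(ranked[:k])

def one_swap_changes_outcome(ballots: List[List[str]], k: int) -> bool:
    """Try every swap of adjacent candidates in every ballot and report
    whether any single such 'mistake' alters the elected committee."""
    original = k_approval_winners(ballots, k)
    for v, ballot in enumerate(ballots):
        for pos in range(len(ballot) - 1):
            swapped = ballot[:]
            swapped[pos], swapped[pos + 1] = swapped[pos + 1], swapped[pos]
            perturbed = ballots[:v] + [swapped] + ballots[v + 1:]
            if k_approval_winners(perturbed, k) != original:
                return True
    return False

# Usage: three voters, committee size 2; a single swap flips the outcome.
votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(one_swap_changes_outcome(votes, 2))  # True
```

Iterating this test over larger swap budgets yields the minimum number of swaps that changes the outcome, which is exactly the quantity studied above.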
Andrzej Mostowski was one of the leading logicians of the 20th century. This volume examines his legacy; it is devoted both to his scientific heritage and to his memory as a great researcher, teacher, organizer of science, and person. It includes a bibliography of Mostowski's writings.
This book contains a cohesive, self-contained collection of theoretical and applied research results achieved in this project, pertaining to nonmonotonic and approximate reasoning systems developed for an experimental unmanned aerial vehicle system used in the project. This book should be of interest to theoreticians and applied researchers alike, as well as to developers of autonomous systems, software agents, and intelligent systems.
This is the first of two volumes which together provide a comprehensive introduction to the modern representation theory of Frobenius algebras. The first part of the book serves as a general introduction to the basic results and techniques of the modern representation theory of finite dimensional associative algebras over fields, including the Morita theory of equivalences and dualities and the Auslander-Reiten theory of irreducible morphisms and almost split sequences. The second part is devoted to fundamental classical and recent results concerning Frobenius algebras and their module categories. Moreover, prominent classes of Frobenius algebras, such as the Hecke algebras of Coxeter groups and the finite dimensional Hopf algebras over fields, are exhibited. This volume is self-contained, and the only prerequisite is a basic knowledge of linear algebra. It includes complete proofs of all results presented and provides a rich supply of examples and exercises. The text is primarily addressed to graduate students starting research in the representation theory of algebras, as well as to mathematicians working in other fields.
This book is concerned with recent trends in the representation theory of algebras and its exciting interaction with geometry, topology, commutative algebra, Lie algebras, quantum groups, homological algebra, invariant theory, combinatorics, model theory and theoretical physics. The collection of articles, written by leading researchers in the field, is conceived as a sort of handbook providing easy access to the present state of knowledge and stimulating further development. The topics under discussion include diagram algebras, Brauer algebras, cellular algebras, quasi-hereditary algebras, Hall algebras, Hecke algebras, symplectic reflection algebras, Cherednik algebras, Kashiwara crystals, Fock spaces, preprojective algebras, cluster algebras, rank varieties, varieties of algebras and modules, moduli of representations of quivers, semi-invariants of quivers, Cohen-Macaulay modules, singularities, coherent sheaves, derived categories, spectral representation theory, Coxeter polynomials, Auslander-Reiten theory, Calabi-Yau triangulated categories, Poincare duality spaces, selfinjective algebras, periodic algebras, stable module categories, Hochschild cohomologies, deformations of algebras, Galois coverings of algebras, tilting theory, algebras of small homological dimensions, representation types of algebras, and model theory. This book consists of fifteen self-contained expository survey articles and is addressed to researchers and graduate students in algebra as well as a broader mathematical community. The articles contain a large number of open problems and give new perspectives for research in the field.
This book constitutes the refereed proceedings of the 21st International Symposium on Mathematical Foundations of Computer Science, MFCS '96, held in Cracow, Poland in September 1996. The volume presents 35 revised full papers selected from a total of 95 submissions together with 8 invited papers and 2 abstracts of invited talks. The papers included cover issues from the whole area of theoretical computer science, with a certain emphasis on mathematical and logical foundations. The 10 invited presentations are of particular value.
The book is devoted to a simplified set-theoretic version of denotational semantics where sets are used in place of Scott's reflexive domains and where jumps are described without continuations. This approach has emerged as a reaction to the sophisticated model of traditional semantics. It was also strongly stimulated by the applications of denotational semantics and especially by its software-industry oriented version known as VDM (Vienna Development Method). The new approach was successfully tested on several examples. Based on this approach the Polish Academy of Sciences created the project MetaSoft aimed at the development of a definitional metalanguage for software engineering. The approach has also been chosen in the project RAISE (ESPRIT) which aims at a similar goal. The book consists of two parts. Part One is devoted to the mathematical foundations of the future definitional metalanguage of MetaSoft. This part also introduces an appropriate notation. Part Two shows the applications of this metalanguage. There the denotational definition of a subset of Pascal is discussed with particular emphasis on Pascal types.
This book is dedicated to the memory of Professor Zdzisław Pawlak, who passed away almost six years ago. He was the founder of the Polish school of Artificial Intelligence and one of the pioneers in Computer Engineering and Computer Science with worldwide influence. He was a truly great scientist, researcher, teacher, and human being. This book, prepared in two volumes, contains more than 50 chapters. This demonstrates that the scientific approaches discovered by Professor Zdzisław Pawlak, especially the rough set approach as a tool for dealing with imperfect knowledge, are vivid and intensively explored by many researchers in many places throughout the world. The submitted papers prove that interest in rough set research is growing and that it is possible to see many excellent new results, both on the theoretical foundations and on applications of rough sets, alone or in combination with other approaches. We are proud to offer the readers this book.
The LNCS journal Transactions on Rough Sets is devoted to the entire spectrum of rough sets related issues, from logical and mathematical foundations, through all aspects of rough set theory and its applications, such as data mining, knowledge discovery, and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets and theory of evidence. This book, which constitutes the eighth volume of the Transactions on Rough Sets series, contains a wide spectrum of contributions to the theory and applications of rough sets. The 17 papers presented explore several research streams and introduce a number of new advances in the foundations and applications of artificial intelligence, engineering, logic, mathematics, and science.
In today’s society the issue of security has become a crucial one. This volume brings together contributions on the use of knowledge-based technology in security applications by the world’s leading researchers in the field.
This book constitutes the refereed proceedings of the 7th International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing, RSFDGrC'99, held in Yamaguchi, Japan, in November 1999. The 45 revised regular papers and 15 revised short papers presented together with four invited contributions were carefully reviewed and selected from 89 submissions. The book is divided into sections on rough computing: foundations and applications, rough set theory and applications, fuzzy set theory and applications, nonclassical logic and approximate reasoning, information granulation and granular computing, data mining and knowledge discovery, machine learning, and intelligent agents and systems.
This book constitutes the refereed proceedings of the 10th International Symposium on Methodologies for Intelligent Systems, ISMIS'97, held in Charlotte, NC, USA, in October 1997. The 57 revised full papers were selected from a total of 117 submissions. Also included are four invited papers. Among the topics covered are intelligent information systems, approximate reasoning, evolutionary computation, knowledge representation and integration, learning and knowledge discovery, AI-Logics, discovery systems, data mining, query processing, etc.
A woman scientist discovers that a meltdown in a nuclear reactor was caused by a bug in a computer chip. She informs the manufacturer. The company's owner, a member of the military-industrial complex, has her killed to silence her. So begins a thriller starring lady FBI agent Esther Cruz.