A common feature of many approaches to modeling sensory statistics is an emphasis on capturing the "average." From early representations in the brain, to highly abstracted class categories in machine learning for classification tasks, central-tendency models based on the Gaussian distribution are a seemingly natural and obvious choice for modeling sensory data. However, insights from neuroscience, psychology, and computer vision suggest an alternative strategy: preferentially focusing representational resources on the extremes of the distribution of sensory inputs. The notion of treating extrema near a decision boundary as features is not necessarily new, but a comprehensive statistical theory of recognition based on extrema is only now emerging in the computer vision literature. This book begins by introducing statistical Extreme Value Theory (EVT) for visual recognition. In contrast to central-tendency modeling, it is hypothesized that distributions near decision boundaries form a more powerful model for recognition tasks by focusing coding resources on data that are arguably the most diagnostic features. EVT has several important properties: strong statistical grounding, better modeling accuracy near decision boundaries than Gaussian modeling, the ability to model asymmetric decision boundaries, and accurate prediction of the probability of an event beyond our experience. The second part of the book uses the theory to describe a new class of machine learning algorithms for decision making that are a measurable advance beyond the state of the art. This includes methods for post-recognition score analysis, information fusion, multi-attribute spaces, and calibration of supervised machine learning algorithms.
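To make the post-recognition score analysis idea concrete, here is a minimal sketch (not the book's implementation) of fitting an EVT-family distribution to the tail of recognition scores. The synthetic scores, the tail size of 50, and the use of `scipy.stats.weibull_min` are all illustrative assumptions; the point is that only the extreme scores near the decision boundary are modeled, rather than the full distribution.

```python
# Sketch of EVT-style tail modeling for post-recognition score analysis.
# Assumptions: synthetic non-match scores, a top-50 tail, and a Weibull
# fit (one of the EVT-family distributions) via scipy.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
scores = rng.normal(loc=0.4, scale=0.1, size=1000)  # synthetic non-match scores

# Keep only the extreme tail: the scores closest to the decision boundary.
tail = np.sort(scores)[-50:]

# Fit a Weibull to the tail, anchoring the location just below its minimum.
shape, loc, scale = weibull_min.fit(tail, floc=tail.min() - 1e-6)

# Survival function: probability that a score this extreme (e.g., 0.75)
# arises from the non-match tail -- a calibrated confidence for the match.
p = weibull_min.sf(0.75, shape, loc=loc, scale=scale)
print(p)
```

Because the fit uses only tail samples, it can remain accurate near the boundary even when a Gaussian fit to all 1000 scores would not be.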
As all aspects of our social and informational lives increasingly migrate online, the line between what is "real" and what is digitally fabricated grows ever thinner—and that fake content has undeniable real-world consequences. A History of Fake Things on the Internet takes the long view of how advances in technology brought us to the point where faked texts, images, and video content are nearly indistinguishable from what is authentic or true. Computer scientist Walter J. Scheirer takes a deep dive into the origins of fake news, conspiracy theories, reports of the paranormal, and other deviations from reality that have become part of mainstream culture, from image manipulation in the nineteenth-century darkroom to the literary stylings of large language models like ChatGPT. Scheirer investigates the origins of Internet fakes, from early hoaxes that traversed the globe via Bulletin Board Systems (BBSs), USENET, and a new messaging technology called email, to today's hyperrealistic, AI-generated Deepfakes. An expert in machine learning and recognition, Scheirer breaks down the technical advances that made new developments in digital deception possible, and shares behind-the-screens details of early Internet-era pranks that have become touchstones of hacker lore. His story introduces us to the visionaries and mischief-makers who first deployed digital fakery and continue to influence how digital manipulation works—and doesn't—today: computer hackers, digital artists, media forensics specialists, and AI researchers. Ultimately, Scheirer argues that problems associated with fake content are not intrinsic properties of the content itself, but rather stem from human behavior, demonstrating our capacity for both creativity and destruction.
Explore new modes of creation to bring virtue back into virtual spaces. At its best, the internet channels the world into a global village of sorts, where digital citizens learn from each other, explore new modes of creation, and help others work through dilemmas in both physical and virtual spaces. Virtue in Virtual Spaces argues that the internet doesn't have to be the cultural wasteland of click-bait, partisan politics, and vulgar content that we see too often today. Technology has tremendous potential for good because of the inherent goodness of human creation and creativity, which can be expressed through the development and use of technology. The authors draw from writing on virtue ethics and Catholic Social Teaching to demonstrate this potential goodness of technology. Eight of the main themes of Catholic Social Teaching are used to build a framework for designing technology to promote human flourishing. In this book, readers will engage with the philosophies behind their favorite social media platforms, examine how the design features in these platforms shape habits and imagination, and gain dialogue-based skills to bring virtue back into virtual spaces.
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound, and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The respective chapters share highly novel methodological insights in order to guide the reader through the basics of intertextuality. In Part 1, “Theory”, the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, “Practice”, specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a “gentle” introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book’s companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality that are presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience.
From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
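As a flavor of the "simple yet effective" matching algorithms described above, here is a minimal sketch of lexical matching via word n-gram overlap. The example texts, the bigram choice, and the Jaccard measure are illustrative assumptions, not material from the book.

```python
# Sketch of lexical matching: Jaccard similarity over word n-gram sets.
# Texts and parameters are illustrative, not drawn from the book.
def ngrams(text, n=2):
    """Return the set of word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=2):
    """Jaccard similarity between the word n-gram sets of two texts."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = "arms and the man I sing"
allusion = "arms and the man he sang of old"
print(round(jaccard(source, allusion), 3))  # -> 0.333
```

Shared bigrams such as ("arms", "and") survive the rewording, so even this tiny measure can surface a candidate allusion for closer reading.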