Explore new modes of creation to bring virtue back into virtual spaces. At its best, the internet channels the world into a global village of sorts, where digital citizens learn from each other, explore new modes of creation, and help others work through dilemmas in both physical and virtual spaces. Virtue in Virtual Spaces argues that the internet doesn't have to be the cultural wasteland of clickbait, partisan politics, and vulgar content that we see too often today. Technology has tremendous potential for good because human creation and creativity, which find expression in the development and use of technology, are inherently good. The authors draw on writing about virtue ethics and Catholic Social Teaching to demonstrate this potential goodness of technology. Eight of the main themes of Catholic Social Teaching are used to build a framework for designing technology to promote human flourishing. In this book, readers will engage with the philosophies behind their favorite social media platforms, examine how the design features in these platforms shape habits and imagination, and gain dialogue-based skills to bring virtue back into virtual spaces.
As all aspects of our social and informational lives increasingly migrate online, the line between what is "real" and what is digitally fabricated grows ever thinner—and that fake content has undeniable real-world consequences. A History of Fake Things on the Internet takes the long view of how advances in technology brought us to the point where faked texts, images, and video content are nearly indistinguishable from what is authentic or true. Computer scientist Walter J. Scheirer takes a deep dive into the origins of fake news, conspiracy theories, reports of the paranormal, and other deviations from reality that have become part of mainstream culture, from image manipulation in the nineteenth-century darkroom to the literary stylings of large language models like ChatGPT. Scheirer investigates the origins of Internet fakes, from early hoaxes that traversed the globe via Bulletin Board Systems (BBSs), USENET, and a new messaging technology called email, to today's hyperrealistic, AI-generated Deepfakes. An expert in machine learning and recognition, Scheirer breaks down the technical advances that made new developments in digital deception possible, and shares behind-the-screens details of early Internet-era pranks that have become touchstones of hacker lore. His story introduces us to the visionaries and mischief-makers who first deployed digital fakery and continue to influence how digital manipulation works—and doesn't—today: computer hackers, digital artists, media forensics specialists, and AI researchers. Ultimately, Scheirer argues that problems associated with fake content are not intrinsic properties of the content itself, but rather stem from human behavior, demonstrating our capacity for both creativity and destruction.
A common feature of many approaches to modeling sensory statistics is an emphasis on capturing the "average." From early representations in the brain, to highly abstracted class categories in machine learning for classification tasks, central-tendency models based on the Gaussian distribution are a seemingly natural and obvious choice for modeling sensory data. However, insights from neuroscience, psychology, and computer vision suggest an alternate strategy: preferentially focusing representational resources on the extremes of the distribution of sensory inputs. The notion of treating extrema near a decision boundary as features is not necessarily new, but a comprehensive statistical theory of recognition based on extrema is only now emerging in the computer vision literature. This book begins by introducing statistical Extreme Value Theory (EVT) for visual recognition. In contrast to central-tendency modeling, it is hypothesized that distributions near decision boundaries form a more powerful model for recognition tasks by focusing coding resources on data that are arguably the most diagnostic features. EVT has several important properties: strong statistical grounding, better modeling accuracy near decision boundaries than Gaussian modeling, the ability to model asymmetric decision boundaries, and accurate prediction of the probability of an event beyond our experience. The second part of the book uses the theory to describe a new class of machine learning algorithms for decision making that are a measurable advance beyond the state of the art. This includes methods for post-recognition score analysis, information fusion, multi-attribute spaces, and calibration of supervised machine learning algorithms.
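As a concrete illustration of the post-recognition score analysis mentioned above, the following Python sketch fits a Weibull model to the tail of a set of recognition scores and asks how far the top score lies beyond that tail. This is a minimal sketch under stated assumptions: the function name, the tail size of 20, and the use of scipy's weibull_min are illustrative choices, not the book's reference implementation.

```python
# Minimal sketch of EVT-based post-recognition score analysis (assumed setup,
# not the book's reference code): model the tail of the non-match scores with
# a Weibull distribution and check whether the top score is an outlier.
import numpy as np
from scipy.stats import weibull_min

def evt_tail_score(scores, tail_size=20):
    """Return the Weibull CDF value of the top score under a model fit to the
    next-best scores; values near 1 suggest the top score lies well beyond
    the tail of non-matches and is therefore likely a correct match."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending order
    top, tail = scores[0], scores[1:tail_size + 1]           # top score vs. tail
    shape, loc, scale = weibull_min.fit(tail)                # fit the tail only
    return weibull_min.cdf(top, shape, loc=loc, scale=scale)

# Example: one strong match score mixed into a cloud of non-match scores.
rng = np.random.default_rng(0)
scores = np.concatenate(([0.92], rng.normal(0.4, 0.05, 200)))
print(f"outlier probability of top score: {evt_tail_score(scores):.3f}")
```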
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound, and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The chapters share novel methodological insights that guide the reader through the basics of intertextuality. In Part 1, “Theory”, the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, “Practice”, specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a “gentle” introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book’s companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience. From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
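To make the flavor of the lexical-matching chapters concrete, here is a minimal sketch of phrase-level reuse detection: score the overlap of word bigrams between two passages with a Jaccard measure. The sketch is in Python rather than the companion site's R/C++ code, and the example texts, function names, and scoring choice are illustrative assumptions, not material from the book.

```python
# Minimal sketch of lexical matching for intertextuality (illustrative only):
# measure shared phrasing between two passages via word-bigram Jaccard overlap.
import re

def word_ngrams(text, n=2):
    """Lowercase the text, keep word tokens, and return the set of word n-grams."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def lexical_similarity(text_a, text_b, n=2):
    """Jaccard overlap of word n-grams: 0.0 = no shared phrasing, 1.0 = identical."""
    a, b = word_ngrams(text_a, n), word_ngrams(text_b, n)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical example: two English renderings of the opening of the Aeneid.
source = "Arms and the man I sing, who first from the coast of Troy came to Italy."
candidate = "I sing of arms and the man who came, exiled by fate, from the coast of Troy."
print(f"bigram overlap: {lexical_similarity(source, candidate):.2f}")
```

In practice a matcher like this would typically be run over many candidate passages and thresholded for the corpus at hand; the book's later chapters extend the same idea to sound and semantic matching.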
The core of this three-volume book deals with damage-associated molecular patterns, abbreviated “DAMPs”, which are unique molecules that preserve life and fight for the survival of all organisms on this planet by triggering robust inflammatory/immune defense responses upon any injury, including those caused by pathogens such as viruses and bacteria. However, these molecules also have a dark side: when produced in excess upon severe insults, they can trigger serious human diseases. The three volumes present the current understanding of the importance of DAMP-promoted immune responses in the etiopathogenesis of human diseases and explore how this understanding is impacting diagnosis, prognosis, and future treatment. This third volume addresses the potential of DAMPs in clinical practice, as therapeutic targets and therapeutics, by focusing on antigen-related diseases that are pathogenetically dominated by DAMPs, that is, infectious and autoimmune disorders and allograft rejection (an undesired function of these molecules), as well as tumor rejection (the desired function of these molecules). The book is written for professionals from all medical and paramedical disciplines who are interested in bringing innovative data from modern inflammation and immunity research into clinical practice. In this sense, the book reflects an approach to translational medicine. The readership will include all practitioners and clinicians, in particular ICU clinicians, infectiologists, microbiologists, virologists, hematologists, rheumatologists, diabetologists, neurologists, transplantologists, oncologists, and pharmacists. Also available: Damage-Associated Molecular Patterns in Human Diseases - Vol. 1: Injury-Induced Innate Immune Responses; Damage-Associated Molecular Patterns in Human Diseases - Vol. 2: Danger Signals as Diagnostics, Prognostics, and Therapeutic Targets.