For over a century, motion pictures have entertained us, occasionally educated us, and even served a few specialized fields of study. Now, however, with the precipitous drop in prices and increase in image quality, motion pictures are as widespread as paperback books and postcards once were. Yet theories and practices of analysis for particular genres and analytical stances, and definitions, concepts, and tools that span platforms, have been wanting. Therefore, we developed a suite of tools to enable close structural analysis of the time-varying signal set of a movie. We take an information-theoretic approach: the message is a signal set, generated (coded) under various antecedents, sent over some channel, and decoded under some other set of antecedents. Cultural, technical, and personal antecedents might favor certain message-making systems over others. The same holds true at the recipient end--yet the signal set remains the signal set. In order to discover how movies work--their structure and meaning--we honed ways to provide pixel-level analysis, forms of clustering, and precise descriptions of what parts of a signal influence viewer behavior. We assert that analysis of the signal set across the evolution of film--from Edison to Hollywood to Brakhage to cats on social media--yields a common ontology with varying instantiations (responses to changes in coding and decoding antecedents).
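As a toy illustration of the kind of pixel-level structural analysis described above, the following sketch flags likely shot boundaries by thresholding the mean absolute difference between consecutive frames. It is a generic illustration, not a tool from the suite itself; the `shot_boundaries` helper, the frame arrays, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def shot_boundaries(frames, threshold=30.0):
    """Flag likely cuts: indices where the mean absolute pixel
    difference between consecutive frames exceeds a threshold."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three tiny grayscale "frames": two identical, then an abrupt change.
frames = [
    np.zeros((2, 2), dtype=np.uint8),
    np.zeros((2, 2), dtype=np.uint8),
    np.full((2, 2), 255, dtype=np.uint8),
]
```

Real shot-boundary detectors use far richer features (histograms, edges, learned embeddings), but frame differencing conveys the basic idea of treating a movie as a time-varying signal set.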
Many data-intensive applications that use machine learning or artificial intelligence techniques depend on humans to provide an initial dataset, enabling algorithms to process the rest or allowing other humans to evaluate the performance of such algorithms. Not only can labeled data for training and evaluation be collected faster, cheaper, and more easily than ever before, but we now see the emergence of hybrid human-machine software that combines computations performed by humans and machines. There are, however, real-world practical issues with the adoption of human computation and crowdsourcing, and building systems and data processing pipelines that require crowd computing remains difficult. In this book, we present practical considerations for designing and implementing tasks that use humans and machines in combination, with the goal of producing high-quality labels.
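The simplest aggregation step in such a labeling pipeline is majority voting over redundant worker judgments. The sketch below is a generic illustration of that idea, not code from the book; the `majority_label` helper is hypothetical.

```python
from collections import Counter

def majority_label(worker_labels):
    """Aggregate redundant crowd judgments for one item by majority vote.

    Ties resolve to the label that appears first among the most common,
    so production pipelines typically use an odd number of workers or a
    weighted scheme (e.g., by worker accuracy) instead.
    """
    return Counter(worker_labels).most_common(1)[0][0]
```

For example, `majority_label(["cat", "dog", "cat"])` yields `"cat"`. More sophisticated aggregation models (weighting workers by estimated reliability) build on this same structure.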
As digital collections continue to grow, the underlying technologies that serve up content also continue to expand and develop, presenting new challenges that test the ethical ideologies of practitioners in their everyday work. This book discusses notions of what constitutes private information and the potential presence of such information in both analog and digital collections. It lays the groundwork for introducing the topic of privacy to digital collections by providing examples from documented real-world scenarios and making recommendations for future research. --excerpts from back cover
Information Retrieval performance measures are usually retrospective in nature, representing the effectiveness of an experimental process. In the sciences, however, phenomena may be predicted, given the parameter values of the system. After developing a measure that can be applied retrospectively or predictively, the performance of a system using a single query term can be predicted under several different types of probabilistic distributions. Performance can also be predicted with multiple terms, where statistical dependence between terms exists and is understood. These predictive models may be applied to realistic problems, and the results used to validate the accuracy of the methods. Prediction can likewise determine whether metadata or index labels should be applied in particular cases. Linguistic information, such as part-of-speech tags, can increase the discrimination value of existing terminology and can also be studied predictively. This work provides methods for measuring performance that may be used predictively, means of predicting these measures for both a single query term and multiple terms, and suggested ways of applying the resulting formulae.
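To make the single-term prediction idea concrete: if we assume a term occurs in relevant and non-relevant documents with known probabilities, the expected precision of retrieving every document containing that term follows from elementary probability. This is a minimal sketch under those assumptions, not the book's actual models; the `expected_precision` function and its parameters are illustrative.

```python
def expected_precision(p_rel, p_nonrel, n_rel, n_nonrel):
    """Predicted precision when retrieving all documents containing a term.

    p_rel     -- probability the term occurs in a relevant document
    p_nonrel  -- probability the term occurs in a non-relevant document
    n_rel     -- number of relevant documents in the collection
    n_nonrel  -- number of non-relevant documents in the collection
    """
    retrieved_rel = p_rel * n_rel          # expected relevant retrieved
    retrieved_nonrel = p_nonrel * n_nonrel # expected non-relevant retrieved
    total = retrieved_rel + retrieved_nonrel
    return retrieved_rel / total if total else 0.0
```

With `p_rel=0.8`, `p_nonrel=0.1`, 100 relevant and 900 non-relevant documents, the predicted precision is 80/170, about 0.47; richer distributional assumptions and term dependence refine this kind of estimate.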
Information is essential to all human activity, and information in electronic form both amplifies and augments human information interactions. This lecture surveys some of the different classical meanings of information, focuses on the ways that electronic technologies are affecting how we think about these senses of information, and introduces an emerging sense of information that has implications for how we work, play, and interact with others. The evolution of computers and electronic networks, and people's uses and adaptations of these tools, manifests a dynamic space called cyberspace. Our traces of activity in cyberspace give rise to a new sense of information as instantaneous identity states that I term proflection of self. Proflections of self influence how others act toward us. Four classical senses of information are described as context for this new form of information. The four senses selected for inclusion here are the following: thought and memory, communication process, artifact, and energy. Human mental activity and state (thought and memory) have neurological, cognitive, and affective facets. The act of informing (communication process) is considered from the perspective of human intentionality and technical developments that have dramatically amplified human communication capabilities. Information artifacts comprise a common sense of information that gives rise to a variety of information industries. Energy is the most general sense of information and is considered from the point of view of physical, mental, and social state change. This sense includes information theory as a measurable reduction in uncertainty. This lecture emphasizes how electronic representations have blurred media boundaries and added computational behaviors that yield new forms of information interaction, which, in turn, are stored, aggregated, and mined to create profiles that represent our cyber identities.
Table of Contents: The Many Meanings of Information / Information as Thought and Memory / Information as Communication Process / Information as Artifact / Information as Energy / Information as Identity in Cyberspace: The Fifth Voice / Conclusion and Directions
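The "measurable reduction in uncertainty" sense of information mentioned above can be made concrete with Shannon entropy. The snippet below is a generic textbook illustration, not material from the lecture itself.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip carries 1 bit of uncertainty; learning the outcome
# reduces uncertainty to 0 bits, so the message conveys 1 bit of information.
before = entropy([0.5, 0.5])   # 1.0 bit
after = entropy([1.0])         # 0.0 bits
```

In this framing, the information gained from a message is the drop in entropy it produces, here `before - after`, which is exactly 1 bit.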
Information Architecture is about organizing and simplifying information, designing and integrating information spaces/systems, and creating ways for people to find and interact with information content. Its goal is to help people understand and manage information and make the right decisions accordingly. This updated and revised edition of the book looks at integrated information spaces in the web context and beyond, with a focus on putting theories and principles into practice. In the ever-changing social, organizational, and technological contexts, information architects not only design individual information spaces (e.g., websites, software applications, and mobile devices), but also tackle strategic aggregation and integration of multiple information spaces across websites, channels, modalities, and platforms. Not only do they create predetermined navigation pathways, but they also provide tools and rules for people to organize information on their own and get connected with others. Information architects work with multi-disciplinary teams to determine the user experience strategy based on user needs and business goals, and make sure the strategy gets carried out by following the user-centered design (UCD) process via close collaboration with others. Drawing on the authors’ extensive experience as HCI researchers, User Experience Design practitioners, and Information Architecture instructors, this book provides a balanced view of the IA discipline by applying theories, design principles, and guidelines to IA and UX practices. It also covers advanced topics such as iterative design, UX decision support, and global and mobile IA considerations. Major revisions include moving away from a web-centric view toward multi-channel, multi-device experiences. Concepts such as responsive design, emerging design principles, and user-centered methods such as Agile, Lean UX, and Design Thinking are discussed and related to IA processes and practices.
Nowadays, fashion has become an essential aspect of people's daily life. As each outfit usually comprises several complementary items, such as a top, bottom, shoes, and accessories, a proper outfit largely relies on the harmonious matching of these items. Nevertheless, not everyone is good at outfit composition, especially those who have a poor fashion aesthetic. Fortunately, in recent years the number of online fashion-oriented communities, like IQON and Chictopia, as well as e-commerce sites, like Amazon and eBay, has grown. The tremendous amount of real-world data regarding people's various fashion behaviors has opened a door to automatic clothing matching. Despite its significant value, compatibility modeling for clothing matching, which assesses the compatibility score for a given set of (two or more) fashion items, e.g., a blouse and a skirt, poses tough challenges: (a) the absence of a comprehensive benchmark; (b) the largely untapped problem of comprehensive compatibility modeling with multi-modal features; (c) how to utilize domain knowledge to guide the machine learning; (d) how to enhance the interpretability of compatibility modeling; and (e) how to model the user factor in personalized compatibility modeling. These challenges have been largely unexplored to date. In this book, we shed light on several state-of-the-art theories on compatibility modeling. In particular, to facilitate the research, we first build three large-scale benchmark datasets from different online fashion websites, including IQON and Amazon. We then introduce a general data-driven compatibility modeling scheme based on advanced neural networks. To make use of the abundant fashion domain knowledge, i.e., clothing matching rules, we next present a novel knowledge-guided compatibility modeling framework. Thereafter, to enhance the model interpretability, we put forward a prototype-wise interpretable compatibility modeling approach.
Following that, noticing the subjective aesthetics of users, we extend the general compatibility modeling to a personalized version. Moreover, we further study the real-world problem of personalized capsule wardrobe creation, aiming to generate a minimal collection of garments that is both internally compatible and suitable for the user. Finally, we conclude the book and present future research directions, such as generative compatibility modeling, virtual try-on with arbitrary poses, and clothing generation.
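At its core, compatibility modeling maps each item to a feature vector and scores how well a pair goes together. The sketch below uses cosine similarity of item embeddings as a stand-in for the neural scoring functions the book describes; the `compatibility_score` function and the example vectors are illustrative assumptions, not the book's models.

```python
import numpy as np

def compatibility_score(item_a, item_b):
    """Toy compatibility score: cosine similarity of two item
    feature vectors (e.g., embeddings of a top and a bottom)."""
    denom = np.linalg.norm(item_a) * np.linalg.norm(item_b)
    return float(np.dot(item_a, item_b) / denom) if denom else 0.0

# Hypothetical 3-dimensional embeddings of a blouse and two skirts.
blouse = np.array([0.9, 0.1, 0.3])
skirt_match = np.array([0.8, 0.2, 0.4])   # similar style features
skirt_clash = np.array([-0.7, 0.9, -0.2]) # dissimilar style features
```

In the actual frameworks, this fixed similarity is replaced by learned multi-modal networks, matching-rule knowledge, and per-user parameters, but the interface, items in and a score out, is the same.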
Since user study design has been widely applied in search interactions and information retrieval (IR) systems evaluation studies, a deep reflection and meta-evaluation of interactive IR (IIR) user studies is critical for sharpening the instruments of IIR research and improving the reliability and validity of the conclusions drawn from IIR user studies. To this end, we developed a faceted framework for supporting user study design, reporting, and evaluation based on a systematic review of the state-of-the-art IIR research papers recently published in several top IR venues (n=462). Within the framework, we identify three major types of research focuses, extract and summarize facet values from specific cases, and highlight the under-reported user study components which may significantly affect the results of research. Then, we employ the faceted framework in evaluating a series of IIR user studies against their respective research questions and explain the roles and impacts of the underlying connections and "collaborations" among different facet values. Through bridging diverse combinations of facet values with the study design decisions made for addressing research problems, the faceted framework can shed light on IIR user study design, reporting, and evaluation practices and help students and young researchers design and assess their own studies.
Evaluation has always played a major role in information retrieval, with the early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today. The lecture starts with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on the evolving of the older evaluation methodologies to handle new information access techniques. This includes how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval, including a look at the search log studies mainly done by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations. 
Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion
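Many of the metrics used in the batch evaluation campaigns discussed above build on average precision over a ranked result list. The function below is a standard textbook formulation included for illustration, not code from the lecture.

```python
def average_precision(ranked_relevance):
    """Average precision for one query, given a ranked list of
    binary relevance judgments (1 = relevant, 0 = not relevant)."""
    hits = 0
    precisions = []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at this relevant doc
    return sum(precisions) / hits if hits else 0.0
```

For a run that places relevant documents at ranks 1 and 3, the average precision is (1/1 + 2/3) / 2 = 5/6; averaging this quantity over a topic set gives the familiar MAP score reported in campaigns such as TREC.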
Information and computer technology arrived in classrooms more than three decades ago, yet despite the efforts of educators and technologists, much teaching and learning has remained unchanged since its arrival. This stands in contrast to the widespread adoption of computer technology in many other endeavors. Changing education to reflect the dominant role of technology in society requires understanding how technology has influenced (and continues to influence) several aspects of schools, each of which is detailed in this book. The effects of technology on the digital generations now enrolled in schools are described, as is the nature of the technology-mediated interaction that will prepare these generations for an unpredictable future. Strategies and approaches for curriculum design, professional development, and other aspects of school organization are presented as well. Teachers, school leaders, and technology leaders will find valuable guidance for refreshing teaching and learning that makes use of technology.