This book compares and contrasts the principles and practices of rule-based machine translation (RBMT), statistical machine translation (SMT), and example-based machine translation (EBMT). Presenting numerous examples, the text introduces language divergence as the fundamental challenge to machine translation, works through word alignment in detail, explores the IBM models of machine translation, covers the mathematics of phrase-based SMT, provides complete walk-throughs of interlingua-based and transfer-based RBMT, and analyzes EBMT, showing how translation parts can be extracted and recombined to automatically translate a new input.
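Since the blurb highlights word alignment and the IBM models, a toy illustration may help make the idea concrete. The sketch below is not taken from the book; it runs a few EM iterations of IBM Model 1 on an invented two-sentence-pair corpus, and all sentence pairs and variable names are assumptions made for the example.

```python
from collections import defaultdict

# Toy parallel corpus (invented for illustration): (foreign, english) sentence pairs.
corpus = [
    ("das haus".split(), "the house".split()),
    ("das buch".split(), "the book".split()),
    ("ein buch".split(), "a book".split()),
]

# Uniform initialisation of translation probabilities t(f|e).
f_vocab = {f for fs, _ in corpus for f in fs}
t = defaultdict(lambda: 1.0 / len(f_vocab))

for _ in range(10):                      # a few EM iterations
    count = defaultdict(float)           # expected counts c(f, e)
    total = defaultdict(float)           # expected counts c(e)
    for fs, es in corpus:
        for f in fs:
            # E-step: distribute each foreign word's alignment mass over English words.
            z = sum(t[(f, e)] for e in es)
            for e in es:
                delta = t[(f, e)] / z
                count[(f, e)] += delta
                total[e] += delta
    # M-step: renormalise the expected counts to get new t(f|e).
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

print(round(t[("haus", "house")], 3))    # t('haus'|'house') should come out high
```

Even on this toy corpus, the word "das" is explained by "the", which in turn pushes "haus" toward "house"; that pattern of evidence sharing is the essence of the word-alignment models the book works through.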
The book covers several entity and relation extraction techniques, ranging from traditional feature-based techniques to recent deep neural models. Two important focus areas of the book are (i) joint extraction techniques, where the tasks of entity and relation extraction are solved jointly, and (ii) extraction of complex relations, where relation types can be N-ary and cross-sentence. The first part of the book introduces the entity and relation extraction tasks and explains the motivation in detail. It covers all the background machine learning concepts necessary to understand the entity and relation extraction techniques explained later. The second part of the book provides a detailed survey of traditional entity and relation extraction, covering several techniques proposed over the last two decades. The third part of the book focuses on joint extraction techniques, which address the tasks of entity and relation extraction together; several such techniques are surveyed and summarized, and two joint extraction techniques based on the authors’ work are covered in detail. The fourth and last part of the book focuses on complex relation extraction, where the relation types may be N-ary (having more than two entity arguments) and cross-sentence (entity arguments may span multiple sentences). The book highlights several challenges and some recent techniques developed for the extraction of such complex relations, including the authors’ own technique. The book also covers a few domain-specific applications where joint extraction and complex relation extraction techniques are applied.
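To make the notion of joint extraction concrete, here is a minimal sketch, not the authors' method, of one common formulation: casting entity and relation extraction as a single sequence-labelling task whose tags jointly encode entity boundaries, the relation type, and the argument role. The example sentence, spans, and relation name are hypothetical.

```python
# A minimal sketch of a joint tagging scheme: each token tag encodes the
# entity boundary (B/I), the relation type, and the argument role in one label,
# so a single sequence labeller can learn entities and relations together.

def joint_tags(tokens, arg1_span, arg2_span, relation):
    """Return one tag per token: B/I-<relation>-<role> for argument tokens, else O."""
    tags = ["O"] * len(tokens)
    for role, (start, end) in (("1", arg1_span), ("2", arg2_span)):
        for i in range(start, end):
            prefix = "B" if i == start else "I"
            tags[i] = f"{prefix}-{relation}-{role}"
    return tags

# Hypothetical sentence and annotation (spans are token indices, end exclusive).
tokens = "John works for Acme Corp in Boston".split()
print(joint_tags(tokens, arg1_span=(0, 1), arg2_span=(3, 5), relation="Works_For"))
# ['B-Works_For-1', 'O', 'O', 'B-Works_For-2', 'I-Works_For-2', 'O', 'O']
```

A tagger trained on such labels never needs a separate relation classifier, which is the appeal of the joint formulations the book surveys.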
Machine Translation and Transliteration involving Related, Low-resource Languages discusses an important aspect of natural language processing that has received less attention: translation and transliteration involving related languages in a low-resource setting. This is a very relevant real-world scenario for people living in neighbouring states/provinces/countries who speak similar languages and need to communicate with each other, but for whom training data to build supporting MT systems is limited. The book discusses the characteristics of related languages with rich examples and draws connections between two problems: translation for related languages and transliteration. It shows how linguistic similarities can be utilized to learn MT systems for related languages with limited data, and comprehensively discusses the use of subword-level models and multilinguality to exploit these similarities. The second part of the book explores methods for machine transliteration involving related languages based on multilingual and unsupervised approaches. Through extensive experiments over a wide variety of languages, the efficacy of these methods is established.

Features:
- Novel methods for machine translation and transliteration between related languages, supported with experiments on a wide variety of languages.
- An overview of past literature on machine translation for related languages.
- A case study on machine translation between 10 major languages of India, one of the most linguistically diverse countries in the world.

The book presents important concepts and methods for machine translation involving related languages and, more generally, serves as a good reference on NLP for related languages. It is intended for students, researchers and professionals interested in Machine Translation, Translation Studies, Multilingual Computing and Natural Language Processing, and can be used as reference reading for courses in NLP and machine translation.

Anoop Kunchukuttan is a Senior Applied Researcher at Microsoft India. His research spans various areas of multilingual and low-resource NLP. Pushpak Bhattacharyya is a Professor at the Department of Computer Science, IIT Bombay. His research areas are Natural Language Processing, Machine Learning and AI (NLP-ML-AI). Prof. Bhattacharyya has published more than 350 research papers in various areas of NLP.
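As a rough illustration of the subword-level modelling the book advocates, the sketch below learns byte-pair-encoding (BPE) merges from a tiny, invented word list; with related languages, a shared subword vocabulary of this kind lets cognates map onto overlapping units even when parallel data is scarce. The word list and counts are assumptions for illustration only, and this is a generic BPE sketch rather than the book's own recipe.

```python
import re
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a dict of word -> frequency."""
    # Represent each word as space-separated symbols (initially characters).
    vocab = Counter({" ".join(w): c for w, c in words.items()})
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Merge the best pair everywhere it occurs as whole symbols.
        pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(best)) + r"(?!\S)")
        vocab = Counter({pattern.sub("".join(best), w): c for w, c in vocab.items()})
    return merges

# Invented cognate-style word list: related languages often share such stems.
words = {"paani": 4, "paan": 2, "khaana": 3, "khaanaa": 1}
print(learn_bpe(words, num_merges=5))
```

Because the merges are learned from surface strings alone, the same procedure can be run over the concatenated vocabularies of two related languages, which is one simple way their lexical overlap can be exploited.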
This book shows ways of augmenting the capabilities of Natural Language Processing (NLP) systems by means of cognitive-mode language processing. The authors employ eye-tracking technology to record and analyze shallow cognitive information in the form of gaze patterns of readers/annotators who perform language processing tasks. The insights gained from such measures are subsequently translated into systems that help us (1) assess the actual cognitive load in text annotation, with resulting increase in human text-annotation efficiency, and (2) extract cognitive features that, when added to traditional features, can improve the accuracy of text classifiers. In sum, the authors’ work successfully demonstrates that cognitive information gleaned from human eye-movement data can benefit modern NLP.

Currently available Natural Language Processing (NLP) systems are weak AI systems: they seek to capture the functionality of human language processing, without worrying about how this processing is realized in human beings’ hardware. In other words, these systems are oblivious to the actual cognitive processes involved in human language processing. This ignorance, however, is NOT bliss! The accuracy figures of all non-toy NLP systems saturate beyond a certain point, making it abundantly clear that “something different should be done.”
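As a toy illustration of point (2), augmenting traditional features with cognitive ones, the sketch below appends made-up gaze features (average fixation duration and a regression count) to bag-of-words features before training a classifier. This is a generic scikit-learn sketch under invented data, not the authors' pipeline.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented example texts and sentiment labels (1 = positive, 0 = negative).
texts = ["the plot was gripping", "what a complete waste of time",
         "a delightful surprise", "utterly boring and predictable"]
labels = [1, 0, 1, 0]

# Traditional textual features: bag of words.
vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(texts)

# Cognitive features per text, e.g. average fixation duration (ms) and number
# of regressive saccades as an eye tracker might report them (made-up values).
X_gaze = csr_matrix(np.array([[210.0, 1], [340.0, 4], [225.0, 2], [310.0, 5]]))

# Concatenate the textual and cognitive feature blocks and train a classifier.
X = hstack([X_text, X_gaze])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```

The point is only the feature concatenation step: whatever classifier is used, gaze-derived columns sit alongside the usual textual ones.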
This book describes the authors’ investigations of computational sarcasm based on the notion of incongruity. In addition, it provides a holistic view of past work in computational sarcasm and the challenges and opportunities that lie ahead. Sarcastic text is a peculiar form of sentiment expression, and computational sarcasm refers to computational techniques that process sarcastic text. To first understand the phenomenon of sarcasm, three studies are conducted: (a) how is sarcasm annotation impacted when done by non-native annotators? (b) how is sarcasm annotation impacted when the task is to distinguish between sarcasm and irony? and (c) can targets of sarcasm be identified by humans and computers? Following these studies, the book proposes approaches for two research problems: sarcasm detection and sarcasm generation. To detect sarcasm, incongruity is captured in two ways: ‘intra-textual incongruity’, where the authors look at incongruity within the text to be classified (i.e., the target text), and ‘context incongruity’, where the authors incorporate information outside the target text. These approaches use machine-learning techniques such as classifiers, topic models, sequence labelling, and word embeddings. They capture incongruity at multiple levels: (a) sentiment incongruity (based on sentiment mixtures), (b) semantic incongruity (based on word embedding distance), (c) language model incongruity (based on unexpectedness under a language model), (d) author’s historical context (based on past text by the author), and (e) conversational context (based on cues from the conversation). In the second part of the book, the authors present the first known technique for sarcasm generation, which uses a template-based approach to generate a sarcastic response to user input. This book will prove to be a valuable resource for researchers working on sentiment analysis, especially as applied to automation in social media.
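As a rough illustration of semantic incongruity based on word-embedding distance, the sketch below scores a sentence by the most dissimilar pair of its content words, on the intuition that sarcasm often juxtaposes semantically distant concepts. The tiny embedding table is invented and a real system would use pretrained vectors; this is a simplified sketch, not the authors' implementation.

```python
import numpy as np
from itertools import combinations

# Invented 3-dimensional "embeddings" for a handful of words, for illustration only.
embeddings = {
    "love":     np.array([0.90, 0.80, 0.10]),
    "adore":    np.array([0.85, 0.75, 0.15]),
    "monday":   np.array([-0.20, 0.10, 0.90]),
    "mornings": np.array([-0.10, 0.20, 0.80]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def incongruity_score(words):
    """Minimum pairwise similarity among known words; lower means more incongruous."""
    known = [w for w in words if w in embeddings]
    return min(cosine(embeddings[a], embeddings[b]) for a, b in combinations(known, 2))

# "i love monday mornings" pairs a positive word with mundane ones,
# so the minimum pairwise similarity is low.
print(round(incongruity_score("i love monday mornings".split()), 3))
```

Such a score would be only one feature among the sentiment, language-model, and contextual signals the book combines for detection.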