The study of most scientific fields now relies on an ever-increasing amount of data, due to instrumental and experimental progress in monitoring and manipulating complex systems made of many microscopic constituents. How can we make sense of such data, and use them to enhance our understanding of biological, physical, and chemical systems? Aimed at graduate students in physics, applied mathematics, and computational biology, this textbook's primary objective is to introduce the concepts and methods necessary to answer this question at the intersection of probability theory, statistics, optimisation, statistical physics, inference, and machine learning. Its second objective is to provide practical applications of these methods, which will allow students to assimilate the underlying ideas and techniques. While readers of this textbook will need basic knowledge of programming (Python or an equivalent language), the main emphasis is not on mathematical rigour but on the development of intuition and of the deep connections with statistical physics.
The book explores aspects of self-translation, a far from exceptional phenomenon which has been practised, albeit on the quiet, for nearly two thousand years and has recently grown exponentially due to the increasing internationalisation of English and the growing multilingualism of modern societies. Starting from the premise that self-translation is first and foremost a translational act, i.e. a form of rewriting subject to a number of constraints, the book draws on the methods and findings of translation studies to account for the variety of reasons underlying self-translation processes and the diversity of strategies used by self-translators. The cases studied, from Kundera to Ngugi, and addressing writers such as Beckett, Huston, Tagore, Brink, Krog and many others, show that the translation methods employed by self-translators vary considerably depending on their teloi. Nonetheless, most self-translations display domesticating tendencies similar to those observed in allograph translations, which confirms the view that self-translators, just like conventional translators, are never free from the linguistic and cultural constraints imposed by the recontextualising of their texts in a new language. Most interestingly, the study brings to light certain recurring features, e.g. the tendency of author-translators to revise their originals during or after the self-translation process, which make self-translators privileged authors able to revise their texts in the light of the insights gained while translating.
Food represents an unalienable component of everyday life, encompassing different spheres and moments. What is more, in contemporary societies, migration, travel, and communication incessantly expose local food identities to global food alterities, activating interesting processes of transformation that continuously reshape and redefine such identities and alterities. Ethnic restaurants fill up the streets we walk, while in many city markets and supermarkets local products are increasingly complemented with spices, vegetables, and other foods required for the preparation of exotic dishes. Mass and new media constantly provide exposure to previously unknown foods, while “fusion cuisines” have become increasingly popular all over the world. But what happens to food and food-related habits, practices, and meanings when they are carried from one foodsphere to another? What are the main elements involved in such dynamics? And which theoretical and methodological approaches can help in understanding such processes? These are the main issues addressed by this book, which explores both the functioning logics and the tangible effects of one of the most important characteristics of present-day societies: eating the Other.