In-vivo imaging markers of neuronal changes related to Alzheimer’s disease (AD) are ideally suited to serve as diagnostic markers for the early and differential diagnosis of AD, as well as for assessing the neurobiological effects of medical treatments in clinical trials. Novel molecular imaging techniques enable in-vivo detection of cerebral amyloid pathology, whereas magnetic resonance imaging (MRI)-based techniques, such as volumetric MRI and diffusion tensor imaging (DTI), provide structural lesion markers that allow disease progression to be tracked from preclinical through predementia to clinically manifest stages of AD. However, widespread clinical use of these imaging biomarkers is hampered by considerable multicenter variability related to differences in scanner hardware and acquisition protocols, as well as by the lack of internationally agreed-upon standards for analytic design and quantitative metrics. Several strategies for reducing multicenter variability in imaging measures have been proposed, including homogenization of acquisition settings across scanner platforms, stringent quality assurance procedures, and artifact removal through post-acquisition image processing techniques. In addition, selecting appropriate statistical models to account for residual multicenter variability in the data can further improve the accuracy and reproducibility of study results. The first projects for the international standardization of image analysis methods and derived quantitative metrics have recently emerged for volumetric MRI measures. In contrast, the standardization and establishment of DTI-derived measures in a multicenter context are less well developed. Although molecular imaging techniques are already widely used in multicenter settings, sources of variability across sites and appropriate methods to reduce multicenter effects have not yet been explored in detail. The comparability of neuroimaging measures as AD biomarkers in clinical settings worldwide will ultimately depend on the establishment of internationally agreed-upon standards for image acquisition, quality assurance, and quantitative metrics.
From 1940 to 1990, new machines and devices radically changed how music was listened to. Records small and large, new kinds of jukeboxes, and loudspeaker systems not only made it possible to play back music in different ways; they also attest to a fundamental transformation of music and listening itself. Taking the media and machines through which listening took place during this period as its starting point, Listening Devices develops a new history of listening. Although these devices were (and often still are) easily accessible, until now we have had no concept for them. To address this gap, this volume proposes the term “listening device.” In conjunction with this concept, the book develops an original and fruitful method for exploring listening as a historical subject that has been increasingly organized in relation to technology. Case studies of four listening devices serve as points of departure for the analysis, which leads the reader down unfamiliar paths, traversing the popular sound worlds of 1950s rock 'n' roll culture and the disco and club culture of the 1970s and 1980s. Despite the characteristics specific to each listening device, they can nevertheless be compared because of the fundamental similarities they share: they model and manage listening, they actively mediate between the listener and the music heard, and it is this mediation that brings both the listener and the music listened to into being. Ultimately, however, the listening devices themselves are not meant to be heard, so that the music they play back can be. Thus, they take the history of listening to its very limits and confront it with its “other”: a history of non-listening. The book proposes “listening device” as a key concept for sound studies, popular music studies, musicology, and media studies. This conceptual key unlocks a new, productive understanding of the past music and sound cultures of the pre-digital era and, not least, of the listening culture of the digital present.
New regionalism and globalization have been prominent themes in academic and political debates since the beginning of the 1990s. Despite the considerable scholarly attention that the new regionalism has received in recent years, its empirical and theoretical potential has yet to be fully explored. This illuminating study provides an overview of new avenues in theorizing regionalism and proposes a consolidated framework for analysis and comparison. Offering a comparative historical perspective on European and Southeast Asian regionalism, it presents new and imaginative insights into the theory and practice of regionalism and the links between regional developments, globalization, and international order.