The recent financial crisis has made it paramount for the financial services industry to find new perspectives on itself and, most importantly, to gain a better understanding of how the global financial system can be made less vulnerable and more resilient. The primary objective of this book is to illustrate how the safety science of Resilience Engineering, through its key concepts, can help to clarify what the financial services system is and how governance and control of financial services systems can be improved. Resilience is the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions. This definition focuses on the ability to function, rather than merely to be impervious to failure, and thereby bridges the traditional conflict between productivity and safety. The core concept of the book is that the behaviour of the financial services system results from the tight couplings among the humans, organizations and technologies that are necessary to provide complex financial functions, such as the transfer of economic resources. A consequence of this perspective is that the risks associated with these systems cannot be understood without considering the nature of these tight couplings. Adopting this perspective, the book is designed to provide some answers to the following key questions about the financial crisis:

- What actually happened?
- Why and how did it happen?
- Could something similar happen again? How can we see that in time, and how can we control it?
- How can sustainable recovery of the global financial system be established, and how can its resilience be improved?
There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for one. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and to understand not only why things sometimes go wrong but also why they normally succeed.
This book analyses and explains the principles behind the Safety-I and Safety-II approaches, and considers the past and future of safety management practices. The analysis makes use of common examples and cases from domains such as aviation, nuclear power production, process management and health care. The final chapters explain the theoretical and practical consequences of the new Safety-II perspective on day-to-day operations as well as on strategic management (safety culture).
Accident investigation and risk assessment have for decades focused on the human factor, particularly ‘human error’. This bias towards performance failures leads to a neglect of normal performance. It assumes that failures and successes have different origins so there is little to be gained from studying them together. Erik Hollnagel believes this assumption is false and that safety cannot be attained only by eliminating risks and failures. The alternative is to understand why things go right and to amplify that. The ETTO Principle looks at the common trait of people at work to adjust what they do to match the conditions. It proposes that this efficiency-thoroughness trade-off (ETTO) is normal. While in some cases the adjustments may lead to adverse outcomes, these are due to the same processes that produce successes.
This succinct but absorbing book covers the main way stations on James Reason’s 40-year journey in pursuit of the nature and varieties of human error. He presents an engrossing and very personal perspective, offering the reader exceptional insights, wisdom and wit as only James Reason can. A Life in Error charts the development of his seminal and hugely influential work from its original focus on individual cognitive psychology through the broadening of scope to embrace social, organizational and systemic issues.
Human error is so often cited as a cause of accidents. There is a widespread perception of a 'human error problem'. Solutions are thought to lie in changing the people or their role. The label 'human error', however, is prejudicial and hides more than it reveals about how a system malfunctions. This book takes you behind the label. It explains how 'human error' results from social and psychological judgments by the system's stakeholders that focus on only one facet of a set of interacting contributors.
This latest edition of The Field Guide to Understanding 'Human Error' will help you understand how to move beyond 'human error'; how to understand accidents; how to do better investigations; and how to understand and improve your safety work. You will be invited to think creatively and differently about the safety issues you and your organization face. In each chapter, you will find possibilities for a new language, for different concepts, and for new leverage points to influence your own thinking and practice, as well as that of your colleagues and organization.
Since its inception, just after the Second World War, Human Factors research has paid special attention to the issues surrounding human control of systems. Command and control environments continue to represent a challenging domain for human factors research. Modelling Command and Control takes a broad view of command and control research, to include C2 (command and control), C3 (command, control and communication), and C4 (command, control, communication and computers), as well as human supervisory control paradigms. The book presents case studies in diverse military applications (for example, land, sea and air) of command and control. The book explores the differences and similarities among the land, sea and air domains; the theoretical and methodological developments; approaches to system and interface design; and the workload and situation awareness issues involved. It places the role of humans as central and distinct from other aspects of the system. Using extensive case study material, Modelling Command and Control demonstrates how the social and technical domains interact, and why each requires equal treatment and importance in the future.
Managing the Risks of Organizational Accidents introduced the notion of an 'organizational accident'. These are rare but often calamitous events that occur in complex technological systems operating in hazardous circumstances. They stand in sharp contrast to 'individual accidents', whose damaging consequences are limited to relatively few people or assets. Although the two share some common causal factors, they mostly have quite different causal pathways. The frequency of individual accidents - usually lost-time injuries - does not predict the likelihood of an organizational accident. The book also elaborated upon the widely cited Swiss Cheese Model. Organizational Accidents Revisited extends and develops these ideas using a standardized causal analysis of some 10 organizational accidents that have occurred in a variety of domains in the nearly 20 years since the original was published. These analyses provide the 'raw data' for the process of drilling down into the underlying causal pathways. Many contributing latent conditions recur across domains. A number of these - organizational issues, design, procedures and so on - are examined in close detail in order to identify likely problems before they combine to penetrate the defences-in-depth. Where the 1997 book focused largely upon the systemic factors underlying organizational accidents, this complementary follow-up goes beyond this to examine what can be done to improve the 'error wisdom' and risk awareness of those on the spot; they are often the last line of defence and so have the power to halt the accident trajectory before it can cause damage. The book concludes by advocating that system safety should require the integration of systemic factors (collective mindfulness) with individual mental skills (personal mindfulness).