AI doesn’t have to be a black box. These practical techniques help shine a light on your model’s mysterious inner workings. Make your AI more transparent, and you’ll improve trust in your results, combat data leakage and bias, and ensure compliance with legal requirements.

In Interpretable AI, you will learn:
- Why AI models are hard to interpret
- Interpreting white box models such as linear regression, decision trees, and generalized additive models
- Partial dependence plots, LIME, SHAP, and Anchors, and other techniques such as saliency mapping, network dissection, and representational learning
- What fairness is and how to mitigate bias in AI systems
- Implementing robust AI systems that are GDPR-compliant

Interpretable AI opens up the black box of your AI models. It teaches cutting-edge techniques and best practices that can make even complex AI systems interpretable. Each method is easy to implement with just Python and open source libraries; a brief illustrative sketch follows this description. You’ll learn to identify when you can utilize models that are inherently transparent, and how to mitigate opacity when your problem demands the power of a hard-to-interpret deep learning model.

About the technology
It’s often difficult to explain how deep learning models work, even for the data scientists who create them. Improving transparency and interpretability in machine learning models minimizes errors, reduces unintended bias, and increases trust in the outcomes. This unique book contains techniques for looking inside “black box” models, designing accountable algorithms, and understanding the factors that cause skewed results.

About the book
Interpretable AI teaches you to identify the patterns your model has learned and why it produces its results. As you read, you’ll pick up algorithm-specific approaches, like interpreting regression and generalized additive models, along with tips to improve performance during training. You’ll also explore methods for interpreting complex deep learning models where some processes are not easily observable. AI transparency is a fast-moving field, and this book simplifies cutting-edge research into practical methods you can implement with Python.

What’s inside
- Techniques for interpreting AI models
- Counteracting errors from bias, data leakage, and concept drift
- Measuring fairness and mitigating bias
- Building GDPR-compliant AI systems

About the reader
For data scientists and engineers familiar with Python and machine learning.

About the author
Ajay Thampi is a machine learning engineer focused on responsible AI and fairness.

Table of Contents
PART 1 INTERPRETABILITY BASICS
1 Introduction
2 White-box models
PART 2 INTERPRETING MODEL PROCESSING
3 Model-agnostic methods: Global interpretability
4 Model-agnostic methods: Local interpretability
5 Saliency mapping
PART 3 INTERPRETING MODEL REPRESENTATIONS
6 Understanding layers and units
7 Understanding semantic similarity
PART 4 FAIRNESS AND BIAS
8 Fairness and mitigating bias
9 Path to explainable AI
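As a minimal illustration of the kind of workflow the description above refers to, the sketch below explains a tree-ensemble model with the open source shap library and scikit-learn. The dataset, model, and plotting choices are assumptions made for a self-contained example, not an excerpt from the book.

```python
# Illustrative sketch only (not from the book): explaining a tree-ensemble
# model's predictions with SHAP, using Python and open source libraries.
# The dataset (scikit-learn's diabetes data) and the random forest model
# are assumptions chosen to keep the example self-contained.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a hard-to-interpret ensemble model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values: per-prediction attributions that show
# how much each feature pushed the output above or below the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Summarize feature importance and direction of effect across the test set.
shap.summary_plot(shap_values, X_test)
```

TreeExplainer is used here because exact SHAP values can be computed efficiently for tree ensembles; model-agnostic alternatives such as shap.KernelExplainer work with arbitrary models but are considerably slower.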
This handbook provides an overview of the current state of nanotechnology-based devices with applications in environmental science, focusing on nanomaterials and polymer nanocomposites. It pays special attention to nanotechnology-based approaches that promise easier, faster, and cheaper processes in environmental monitoring and remediation, and it presents detailed, up-to-date information on the economics, toxicity, and regulations related to nanotechnology. The book closes with a look at the role of nanotechnology in a green and sustainable future. With its coverage of existing and soon-to-be-realized devices, this is an indispensable reference for both academic and corporate R&D.
The neighboring north Indian districts of Jaipur and Ajmer are identical in language, geography, and religious and caste demography. But when the famous Babri Mosque in Ayodhya was destroyed in 1992, Jaipur burned while Ajmer remained peaceful; when the state clashed over low-caste affirmative action quotas in 2008, Ajmer's residents rioted while Jaipur's citizens stayed calm. What explains these divergent patterns of ethnic conflict across multiethnic states? Using archival research and elite interviews in five case studies spanning north, south, and east India, as well as a quantitative analysis of 589 districts, Ajay Verghese shows that the legacies of British colonialism drive contemporary conflict. Because India served as a model for British colonial expansion into parts of Africa and Southeast Asia, this project links Indian ethnic conflict to violent outcomes across an array of multiethnic states, including cases as diverse as Nigeria and Malaysia. The Colonial Origins of Ethnic Violence in India makes important contributions to the study of Indian politics, ethnicity, conflict, and historical legacies.