Publisher's Synopsis
Generate different forms of machine learning model explanations to gain insight into the logic of models, and learn how to measure bias in machine learning models.
Key Features
- Measure group fairness and individual fairness, and choose the right metric for each scenario (see the fairness sketch after this list)
- Explain a model's logic using different explanation techniques
- Mitigate bias at different stages of the machine learning pipeline
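As a taste of the measurement theme above, here is a minimal sketch of one group-fairness check using the open-source fairlearn package on synthetic data; the data, the group labels, and the choice of metric are illustrative assumptions, not examples taken from the book itself.

```python
# Minimal sketch: measuring group fairness with fairlearn on
# synthetic data. All names and data are illustrative assumptions.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)     # ground-truth labels
y_pred = rng.integers(0, 2, size=1000)     # a model's predictions
group = rng.choice(["A", "B"], size=1000)  # protected attribute

# Difference in selection rates between the groups; 0 means parity.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"demographic parity difference: {dpd:.3f}")
```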
Book Description
As we incorporate the next wave of AI-enabled products into high-stakes decisions, we need some assurance of the safety that we have come to expect from everyday products. Continuing the progress of using AI in high-stakes decisions requires trusting AI-enabled solutions to deliver their promised benefits while protecting the public from harm. Questions about the security, safety, privacy, and fairness of AI-enabled decisions need to be answered as a condition for deploying AI solutions at scale. This book is a guide that will introduce you to the key concepts, use cases, tools, and techniques of the emerging field of Responsible AI. We will cover hands-on coding techniques to identify and measure bias. Measuring bias is not enough, though: we also need to explain and fix our models, and this book outlines how to do so throughout the machine learning pipeline. By the end of this book, you will have mastered Python coding techniques for explaining machine learning models' logic, measuring their fairness at the individual and group levels, and monitoring them in production environments to detect degradation in their accuracy or fairness.
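As one hedged illustration of the explanation theme described above, here is a minimal sketch that probes a model's logic with scikit-learn's permutation importance; the model, the synthetic data, and the settings are illustrative assumptions rather than the book's own examples.

```python
# Minimal sketch: a global explanation via permutation importance.
# The model and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score:
# large drops indicate features the model genuinely relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```

What you will learn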
- Explain the fundamental concepts of Responsible AI
- Audit machine learning models to ascertain their group and individual fairness outcomes
- Apply explanatory techniques to gain insight into the inner logic of complex machine learning models
- Alter the development of machine learning models using pre-processing, in-processing, and post-processing techniques to mitigate biased outcomes
- Monitor machine learning models in production to identify drift and manage adverse impacts (as sketched after this list)
- Describe emerging trends in Responsible AI
- Apply mitigation techniques to remediate identified biases in models (a reweighing sketch follows this list)
- Monitor models for degradation after their production launch to ensure that accuracy and fairness objectives are maintained over time
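The mitigation bullets above can be made concrete with a small pre-processing sketch in the spirit of reweighing (as implemented, for example, in AIF360): each (group, label) combination is weighted so that no combination dominates training. The synthetic data and the weighting scheme below are illustrative assumptions, not the book's own code.

```python
# Minimal sketch: bias mitigation by reweighing before training.
# Synthetic data; every name here is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
group = rng.choice(["A", "B"], size=1000)                # protected attribute
y = (X[:, 0] + 0.8 * (group == "A") > 0.5).astype(int)  # biased labels

# Weight each (group, label) cell by expected / observed frequency,
# so the classifier no longer learns group A's inflated positive rate.
weights = np.ones(len(y))
for g in ("A", "B"):
    for label in (0, 1):
        mask = (group == g) & (y == label)
        expected = (group == g).mean() * (y == label).mean()
        if mask.any():
            weights[mask] = expected / mask.mean()

model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Likewise, the monitoring bullets can be sketched with one simple drift check: comparing a feature's training distribution against recent production data using a two-sample Kolmogorov-Smirnov test. The 0.05 threshold and the synthetic shift are illustrative assumptions.

```python
# Minimal sketch: detecting feature drift with a two-sample KS test.
# Threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, size=5000)  # training distribution
prod_feature = rng.normal(loc=0.3, size=5000)   # shifted production data

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.05:
    print(f"possible drift (KS statistic={stat:.3f}, p={p_value:.4f})")
else:
    print("no significant drift detected")
```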
Who this book is for
Data Scientists, Machine Learning Developers, and Data Science professionals who want to ensure that their machine learning models' predictions are unbiased and accurate. A working knowledge of Python programming and of the basic concepts of machine learning model training and data validation is helpful.