TensorFlow Responsible AI Guidebook
Introduction
In 2018, Google introduced its AI Principles, which guide the ethical development and use of AI in research and products. In line with these principles, the TensorFlow team works to provide developers with tools and techniques to adhere to Responsible AI (RAI) practices.
In this guidebook, you’ll find guidance on how to apply tools in the Responsible AI Toolkit to develop a cohesive workflow that serves your specific use case and product needs. The tools in this guidebook cover domains such as fairness and transparency. This is an active area of development at Google, and you can expect the guidebook to grow to cover additional related areas, such as privacy, explainability, and robustness.
Guidebook Organization
API Documentation & Guidance
For each tool, guidance is provided on what the tool does, where it might fit in your workflow, and its various usage considerations. Where applicable, an “Install” page is included in the “Guide” tab for each tool, and detailed API documentation in the “API” tab. For some tools, technical guides demonstrate concepts that users might find challenging when applying them.
Tutorials
Whenever possible, notebook tutorials are provided showing how tools in the RAI Toolkit can be applied. These are typically toy examples chosen to cast a spotlight on a specific tool. If you have questions about these, or if there are additional use cases you’d like to see explored, please reach out to the TensorFlow RAI team at tf-responsible-ai@google.com.
The following tutorials can get you started with tools for model fairness evaluation and remediation, model transparency, and privacy assessment.
Introduction to Fairness Indicators
An introduction to Fairness Indicators running in a Google Colab notebook. Click the Run in Google Colab button to try it yourself.
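To give a sense of what such an evaluation looks like in code, here is a minimal sketch of configuring Fairness Indicators through TensorFlow Model Analysis (TFMA). The file paths, the `label` key, and the `gender` slicing column are hypothetical placeholders for your own model and data.

```python
# A minimal sketch of a Fairness Indicators evaluation via TensorFlow
# Model Analysis (TFMA). All paths, the "label" key, and the "gender"
# slicing column are hypothetical placeholders.
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="label")],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            # Computes fairness metrics (e.g. FPR/FNR) at each threshold.
            tfma.MetricConfig(
                class_name="FairnessIndicators",
                config='{"thresholds": [0.25, 0.5, 0.75]}'),
        ])
    ],
    slicing_specs=[
        tfma.SlicingSpec(),                         # overall metrics
        tfma.SlicingSpec(feature_keys=["gender"]),  # metrics per group
    ],
)

eval_result = tfma.run_model_analysis(
    eval_shared_model=tfma.default_eval_shared_model(
        eval_saved_model_path="path/to/saved_model",
        eval_config=eval_config),
    eval_config=eval_config,
    data_location="path/to/eval_data.tfrecord",
    output_path="path/to/output",
)
```

In a notebook, the resulting `eval_result` can then be rendered with the Fairness Indicators widget via `tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)`.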

Fairness Indicators with TF Hub Text Embeddings
Apply Fairness Indicators to evaluate commonly used fairness metrics in TF Hub Text Embedding models using the Civil Comments Dataset.

Fairness Indicators Lineage Case Study
Apply Fairness Indicators to examine fairness concerns in the COMPAS Dataset.

Use MinDiff with Keras
Try MinDiff, a model remediation technique that can improve model performance across commonly used fairness metrics.
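
As a rough sketch of how MinDiff is applied with the `tensorflow_model_remediation` library: `original_model` and the three `tf.data.Dataset` objects below are hypothetical placeholders standing in for your own model and data.

```python
# A minimal MinDiff sketch using the tensorflow_model_remediation
# library. original_model and the three tf.data.Dataset objects are
# hypothetical placeholders for your own model and data.
import tensorflow as tf
from tensorflow_model_remediation import min_diff

# Pack the main training data together with examples from the two
# groups whose prediction distributions MinDiff should pull closer.
train_ds = min_diff.keras.utils.pack_min_diff_data(
    original_dataset=original_train_ds,
    sensitive_group_dataset=sensitive_group_ds,
    nonsensitive_group_dataset=nonsensitive_group_ds)

# Wrap the original model so the MinDiff loss is added during training.
model = min_diff.keras.MinDiffModel(
    original_model=original_model,
    loss=min_diff.losses.MMDLoss(),  # maximum mean discrepancy
    loss_weight=1.0)

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Because `MinDiffModel` is itself a Keras model, it can be compiled, trained, and evaluated with the usual Keras workflow.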

Generate model cards with TFX
Use the Model Card Toolkit with TFX to generate Model Cards.
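
For orientation, here is a minimal standalone sketch of the Model Card Toolkit’s scaffold, update, and export flow (the TFX integration builds on the same API). The output directory and model details are hypothetical placeholders, and method names may differ slightly across toolkit versions.

```python
# A minimal sketch of the Model Card Toolkit used standalone. The
# output directory and model details are hypothetical placeholders.
import model_card_toolkit as mct

toolkit = mct.ModelCardToolkit(output_dir="model_card_assets")

# Scaffold a model card, fill in details, then render it to HTML.
model_card = toolkit.scaffold_assets()
model_card.model_details.name = "Example classifier"
model_card.model_details.overview = (
    "A short description of what the model does and how it was trained.")
toolkit.update_model_card(model_card)
html = toolkit.export_format()  # HTML string, also written to output_dir
```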

Generate privacy reports
Assess your model's privacy using the TF Privacy Report.
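
As a hedged sketch of one such assessment, TensorFlow Privacy’s membership inference tests can be run on per-example losses from your train and test sets. The `train_losses` and `test_losses` arrays below are hypothetical placeholders, and module paths may vary across library versions.

```python
# A minimal sketch of a membership inference test with TensorFlow
# Privacy. train_losses and test_losses are hypothetical placeholder
# arrays of per-example losses; module paths may vary by version.
import numpy as np
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import (
    membership_inference_attack as mia)
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import (
    AttackInputData, AttackType, SlicingSpec)

attack_input = AttackInputData(
    loss_train=np.asarray(train_losses),  # per-example training losses
    loss_test=np.asarray(test_losses))    # per-example held-out losses

# A higher attacker AUC suggests the model leaks more membership info.
results = mia.run_attacks(
    attack_input,
    SlicingSpec(entire_dataset=True),
    attack_types=[AttackType.THRESHOLD_ATTACK])
print(results.summary(by_slices=False))
```

Additional Considerations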
Designing a responsible AI workflow requires a thoughtful approach at each stage of the ML lifecycle, from problem formulation to deployment and monitoring. Beyond the details of your technical implementation, you will need to make a variety of sociotechnical decisions in order to apply these tools. Common RAI questions that ML practitioners need to consider include:
- Across which demographic categories do I need to ensure my model performs well?
- If I must store sensitive labels in order to perform fairness evaluation, how should I consider the tradeoff between fairness and privacy?
- What metrics or definitions should I use to evaluate for fairness?
- What information should I include in my model and data transparency artifacts?
The answers to these and many other questions depend on your specific use case and product needs. As such, we cannot tell you exactly what to do, but we can provide guidance for making responsible decisions, along with helpful tips and links to relevant research methods wherever possible. As you develop your responsible AI workflow with TensorFlow, please provide feedback at tf-responsible-ai@google.com. Understanding your learnings and challenges is critical to our ability to build products that work for everyone.