Responsible AI
Learn how to integrate Responsible AI practices into your ML workflow using TensorFlow
TensorFlow is committed to advancing the responsible development of AI by sharing a collection of resources and tools with the ML community.
What is Responsible AI?
The development of AI is creating new opportunities to solve challenging, real-world problems. It is also raising new questions about the best way to build AI systems that benefit everyone.
Recommended best practices for AI
AI system design should follow software development best practices while taking a human-centered approach to ML.
Fairness
As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive to everyone.
Interpretability
Understanding and trusting AI systems is important to ensuring they work as intended.
Privacy
Training models on sensitive data requires privacy-preserving safeguards.
Security
Identifying potential threats can help keep AI systems safe and secure.
Learn more about Google's Responsible AI Practices (https://ai.google/responsibilities/responsible-ai-practices/)
Responsible AI in your ML workflow
Responsible AI practices can be incorporated at every step of the ML workflow. Here are some key questions to consider at each stage.
Who is my ML system for?
The way actual users experience your system is essential to assessing the true impact of its predictions, recommendations, and decisions. Make sure to get input from a diverse set of users early on in your development process.
Am I using a representative dataset?
Is your data sampled in a way that represents your users (e.g., the system will be used by people of all ages, but you only have training data from senior citizens) and the real-world setting (e.g., it will be used year-round, but you only have training data from the summer)?
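A quick way to check this is to compare the group distribution in your training data against the distribution you expect among real users. A minimal sketch, where the file name, column name, and expected shares are illustrative assumptions:

```python
# Minimal sketch: compare the age distribution in a training set against
# the distribution expected among real users. "train.csv", "age_band",
# and the expected shares are hypothetical placeholders.
import pandas as pd

train = pd.read_csv("train.csv")
observed = train["age_band"].value_counts(normalize=True)

expected = pd.Series({          # who the product is actually for
    "18-29": 0.25, "30-49": 0.35, "50-64": 0.25, "65+": 0.15,
})

# Large negative gaps flag under-represented groups before training begins.
coverage = pd.DataFrame({"expected": expected, "observed": observed}).fillna(0.0)
coverage["gap"] = coverage["observed"] - coverage["expected"]
print(coverage.sort_values("gap"))
```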
Is there real-world/human bias in my data?
Underlying biases in data can contribute to complex feedback loops that reinforce existing stereotypes.
What methods should I use to train my model?
Use training methods that build fairness, interpretability, privacy, and security into the model.
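For instance, here is a minimal sketch of differentially private training with TensorFlow Privacy's DP-SGD Keras optimizer (one of the Step 3 tools below); the model architecture, loss, and hyperparameters are illustrative placeholders rather than recommended settings:

```python
# Minimal sketch: DP-SGD training with TensorFlow Privacy
# (pip install tensorflow tensorflow-privacy). The model and
# hyperparameters below are illustrative, not a tuned configuration.
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer,
)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(2),
])

optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # clip each per-example gradient to this L2 norm
    noise_multiplier=1.1,   # Gaussian noise added to the clipped gradients
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.15,
)

# DP-SGD needs an unreduced (per-example) loss so it can clip each
# example's gradient before noising and averaging.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, epochs=5)
```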
How is my model performing?
Evaluate user experience in real-world scenarios across a broad spectrum of users, use cases, and contexts of use. Test and iterate internally (dogfood) first, followed by continued testing after launch.
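One lightweight way to surface gaps across user groups is to compute metrics per slice of the evaluation set. A minimal sketch, assuming a binary classifier that emits probabilities and a hypothetical array of group labels (Fairness Indicators and TensorFlow Model Analysis, listed under Step 4 below, do this at scale):

```python
# Minimal sketch: compare a trained binary classifier's accuracy across
# data slices (e.g. age bands, regions). `model`, `features`, `labels`,
# and `groups` are illustrative placeholders.
import numpy as np

def sliced_accuracy(model, features, labels, groups):
    """Return accuracy per slice so gaps between groups become visible."""
    preds = (model.predict(features) > 0.5).astype(int).ravel()
    return {
        g: float(np.mean(preds[groups == g] == labels[groups == g]))
        for g in np.unique(groups)
    }

# e.g. {'18-29': 0.91, '65+': 0.74} would flag a large accuracy gap
# worth investigating before launch.
```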
Are there complex feedback loops?
Even if everything in the overall system design is carefully crafted, ML-based models rarely operate with 100% perfection when applied to real, live data. When an issue occurs in a live product, consider whether it aligns with any existing societal disadvantages, and how it will be impacted by both short- and long-term solutions.
Responsible AI tools for TensorFlow
The TensorFlow ecosystem has a suite of tools and resources to help tackle some of the questions above.
Step 1
Define problem
Use the following resources to design models with Responsible AI in mind.
People + AI Research (PAIR) Guidebook (https://pair.withgoogle.com/guidebook/)
Learn more about the AI development process and key considerations.
PAIR Explorables (https://pair.withgoogle.com/explorables/)
Explore key questions and concepts in the realm of Responsible AI through interactive visualizations.
Step 2
Construct and prepare data
Use the following tools to examine data for potential biases.
Know Your Data (Beta) (https://knowyourdata.withgoogle.com/)
Interactively investigate your dataset to improve data quality and mitigate fairness and bias issues.
TF Data Validation (/tfx/guide/tfdv)
Analyze and transform data to detect problems and engineer more effective feature sets.
Data Cards (https://research.google/static/documents/datasets/crowdsourced-high-quality-colombian-spanish-es-co-multi-speaker-speech-dataset.pdf)
Create a transparency report for your dataset.
Monk Skin Tone Scale (MST) (https://www.skintone.google/)
An openly licensed, more inclusive skin tone scale that makes data collection and model building more robust and inclusive.
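To make Step 2 concrete, here is a minimal TensorFlow Data Validation sketch (the CSV paths are placeholders) that computes dataset statistics, infers a schema from the training split, and checks an evaluation split for anomalies such as missing or out-of-range values:

```python
# Minimal sketch: validate an evaluation split against a schema inferred
# from training data, using TensorFlow Data Validation
# (pip install tensorflow-data-validation). File paths are placeholders.
import tensorflow_data_validation as tfdv

train_stats = tfdv.generate_statistics_from_csv(data_location="train.csv")
schema = tfdv.infer_schema(statistics=train_stats)

eval_stats = tfdv.generate_statistics_from_csv(data_location="eval.csv")
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)

# In a notebook, these render interactive views of feature distributions
# and any detected anomalies (unexpected values, missing features, skew).
tfdv.visualize_statistics(train_stats)
tfdv.display_anomalies(anomalies)
```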
Step 3
Build and train model
Use the following tools to train models using privacy-preserving, interpretable techniques, and more.
TF Model Remediation (/responsible_ai/model_remediation)
Train machine learning models to promote more equitable outcomes.
TF Privacy (/responsible_ai/privacy/guide)
Train machine learning models with privacy.
TF Federated (/federated)
Train machine learning models using federated learning techniques.
TF Constrained Optimization (https://github.com/google-research/tensorflow_constrained_optimization/blob/master/README.md)
Optimize inequality-constrained problems.
TF Lattice (/lattice/overview)
Implement flexible, controlled, and interpretable lattice-based models.
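As an interpretability illustration for Step 3, the following sketch builds a small monotonic calibrated lattice model with TensorFlow Lattice; the two-feature setup, keypoints, and lattice sizes are illustrative assumptions, not a recommended configuration:

```python
# Minimal sketch: a two-feature monotonic model with TensorFlow Lattice
# (pip install tensorflow-lattice). Monotonicity constraints make the
# learned shape auditable: predictions can only rise as each input rises.
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

inputs = [tf.keras.Input(shape=(1,)) for _ in range(2)]

# Calibrate each raw feature onto the lattice's [0, 1] vertex range.
calibrated = [
    tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=10),
        output_min=0.0,
        output_max=1.0,  # lattice_size - 1 for a size-2 lattice
        monotonicity="increasing",
    )(x)
    for x in inputs
]

output = tfl.layers.Lattice(
    lattice_sizes=[2, 2],
    monotonicities=["increasing", "increasing"],
    output_min=0.0,
    output_max=1.0,
)(tf.keras.layers.Concatenate()(calibrated))

model = tf.keras.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse")
# model.fit([x0, x1], y, epochs=10)
```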
Step 4
Evaluate model
Debug, evaluate, and visualize model performance using the following tools.
Fairness Indicators (/responsible_ai/fairness_indicators/guide)
Evaluate commonly identified fairness metrics for binary and multi-class classifiers.
TF Model Analysis (/tfx/model_analysis/install)
Evaluate models in a distributed manner and compute metrics over different slices of data.
What-If Tool (https://pair-code.github.io/what-if-tool/)
Examine, evaluate, and compare machine learning models.
Language Interpretability Tool (https://pair-code.github.io/lit/)
Visualize and understand NLP models.
Explainable AI (https://cloud.google.com/explainable-ai)
Develop interpretable and inclusive machine learning models.
TF Privacy Tests (https://blog.tensorflow.org/2020/06/introducing-new-privacy-testing-library.html)
Assess the privacy properties of classification models.
TensorBoard (/tensorboard/get_started)
Measure and visualize the machine learning workflow.
Step 5
Deploy and monitor
Use the following tools to track and communicate about model context and details.
Model Card Toolkit (/responsible_ai/model_card_toolkit/guide)
Generate model cards with ease using the Model Card Toolkit.
ML Metadata (/tfx/guide/mlmd)
Record and retrieve metadata associated with ML developer and data scientist workflows.
Model Cards (https://modelcards.withgoogle.com/about)
Organize the essential facts of machine learning in a structured way.
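As a sketch of what Step 5 tooling looks like in practice, the following generates a model card with the Model Card Toolkit; the output directory and field values are placeholders, and the method names follow the toolkit's guide (they may differ across versions):

```python
# Minimal sketch: generate a model card with the Model Card Toolkit
# (pip install model-card-toolkit). Directory and field values are
# illustrative placeholders.
import model_card_toolkit as mctlib

mct = mctlib.ModelCardToolkit("model_card_output")

# Scaffold a model card, fill in the essential facts, then export HTML.
model_card = mct.scaffold_assets()
model_card.model_details.name = "Example classifier"
model_card.model_details.overview = (
    "Placeholder description of what the model does, its training data, "
    "and its intended use."
)

mct.update_model_card(model_card)
html = mct.export_format()  # writes an HTML card under the output dir
```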
Community resources
Learn what the community is doing and explore ways to get involved.
Crowdsource by Google (https://crowdsource.google.com/)
Help Google's products become more inclusive and representative of your language, region, and culture.
Responsible AI DevPost Challenge (https://responsible-ai.devpost.com/)
We asked participants to use TensorFlow 2.2 to build a model or application with Responsible AI principles in mind. Check out the gallery to see the winners and other amazing projects.
Responsible AI with TensorFlow (TF Dev Summit '20)
Introducing a framework to think about ML, fairness, and privacy. Watch the video.
Explore Google AI resources to guide your AI/ML journey
See AI principles (https://ai.google/responsibilities/#our-principles)