These Colab-based tutorials walk you through the main TFF concepts and APIs using practical examples. Reference documentation can be found in the TFF guides.
Getting started with federated learning
- Federated Learning for image classification introduces the key parts of the Federated Learning (FL) API, and demonstrates how to use TFF to simulate federated learning on federated MNIST-like data.
- Federated Learning for text generation further demonstrates how to use TFF's FL API to refine a serialized pre-trained model for a language modeling task.
- Tuning recommended aggregations for learning shows how the basic FL computations in tff.learning can be combined with specialized aggregation routines offering robustness, differential privacy, compression, and more (a minimal sketch follows this list).
- Federated Reconstruction for Matrix Factorization introduces partially local federated learning, where some client parameters are never aggregated on the server. The tutorial demonstrates how to use the Federated Learning API to train a partially local matrix factorization model.
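To give a taste of what these tutorials cover, the sketch below builds and runs one round of Federated Averaging on the federated EMNIST simulation data, plugging in one of the recommended robust aggregators. This is an illustrative sketch rather than code taken from the tutorials: it assumes a recent TFF release exposing tff.learning.algorithms.build_weighted_fed_avg, tff.learning.models.from_keras_model, tff.learning.optimizers.build_sgdm, and tff.learning.robust_aggregator, and the model and preprocessing choices are arbitrary.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Federated EMNIST simulation data, keyed by client ID.
emnist_train, _ = tff.simulation.datasets.emnist.load_data()

def preprocess(dataset):
  # Batch first, then flatten the 28x28 images; labels stay as integer ids.
  return dataset.batch(20).map(
      lambda element: (tf.reshape(element['pixels'], [-1, 784]),
                       element['label']))

example_dataset = preprocess(
    emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[0]))

def model_fn():
  # A small Keras model wrapped as a TFF model; input_spec must match the
  # preprocessed client data.
  keras_model = tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])
  return tff.learning.models.from_keras_model(
      keras_model,
      input_spec=example_dataset.element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy())

# Federated Averaging, combined with a recommended aggregator (zeroing +
# clipping) of the kind discussed in the tuning-aggregations tutorial.
learning_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.02),
    model_aggregator=tff.learning.robust_aggregator())

state = learning_process.initialize()
federated_train_data = [
    preprocess(emnist_train.create_tf_dataset_for_client(client_id))
    for client_id in emnist_train.client_ids[:10]
]
output = learning_process.next(state, federated_train_data)
state, metrics = output.state, output.metrics
```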
Getting started with federated analytics
- Private Heavy Hitters shows how to use tff.analytics.heavy_hitters to build a federated analytics computation to discover private heavy hitters (sketched below).
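The sketch below hints at what the Private Heavy Hitters tutorial builds. It assumes the IBLT-based builder tff.analytics.heavy_hitters.iblt.build_iblt_computation and the parameter and output field names used in recent TFF releases; treat these names as assumptions and check your version's documentation (for example, the cap on string length has been renamed across releases and is omitted here).

```python
import tensorflow as tf
import tensorflow_federated as tff

# Build an IBLT-based heavy hitters computation. Parameter names are assumed
# from the tutorial and may differ slightly across TFF versions.
heavy_hitters_computation = tff.analytics.heavy_hitters.iblt.build_iblt_computation(
    capacity=100,            # expected number of distinct strings per round
    max_words_per_user=10,   # cap on any single client's contribution
    max_heavy_hitters=10,    # how many top strings to report
    secure_sum_bits=32,
    multi_contribution=False,
    batch_size=5)

# Each client's input is a tf.data.Dataset of strings.
client_datasets = [
    tf.data.Dataset.from_tensor_slices(['hello', 'world', 'hello']),
    tf.data.Dataset.from_tensor_slices(['hello', 'tff']),
]

# The output contains the discovered strings and their aggregate counts
# (field names assumed: heavy_hitters, heavy_hitters_counts).
output = heavy_hitters_computation(client_datasets)
print(output.heavy_hitters, output.heavy_hitters_counts)
```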
Writing custom federated computations
- Building Your Own Federated Learning Algorithm shows how to use the TFF Core APIs to implement federated learning algorithms, using Federated Averaging as an example.
- Composing Learning Algorithms shows how to use the TFF Learning API to easily implement new federated learning algorithms, especially variants of Federated Averaging.
- Custom Federated Algorithms, Part 1: Introduction to the Federated Core and Part 2: Implementing Federated Averaging introduce the key concepts and interfaces offered by the Federated Core API (FC API); a minimal FC computation is sketched after this list.
- Implementing Custom Aggregations explains the design principles behind the tff.aggregators module and best practices for implementing custom aggregation of values from clients to server.
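To give a flavor of the Federated Core API that these custom-algorithm tutorials build on, here is a minimal federated computation: each client contributes a float, and the server receives their federated mean. The decorator and placement helpers (tff.federated_computation, tff.type_at_clients, tff.federated_mean) are the ones used in the tutorials; in very recent TFF releases some of these names may differ.

```python
import tensorflow as tf
import tensorflow_federated as tff

# A minimal Federated Core computation: clients each hold a float32 value,
# and the server computes their federated mean. Full algorithms such as
# Federated Averaging are composed from building blocks like this one.
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(client_temperatures):
  return tff.federated_mean(client_temperatures)

# In a simulation, client values are supplied as a plain Python list.
print(get_average_temperature([68.5, 70.3, 69.8]))  # -> approximately 69.53
```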
Simulation best practices
- TFF simulation with accelerators (GPU) shows how TFF's high-performance runtime can be used with GPUs.
- Working with ClientData gives best practices for integrating TFF's ClientData-based simulation datasets into TFF computations (see the sketch below).
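Below is a minimal sketch of the ClientData workflow, using the federated EMNIST simulation dataset. The preprocess hook shown is one of the patterns the Working with ClientData tutorial discusses as preferable to materializing every client dataset eagerly in Python; API names assume a recent TFF release.

```python
import tensorflow_federated as tff

# Simulation datasets are exposed as tff.simulation.datasets.ClientData.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()

# Each client ID maps to that client's local examples as a tf.data.Dataset.
first_client_id = emnist_train.client_ids[0]
first_client_dataset = emnist_train.create_tf_dataset_for_client(first_client_id)

# Attach preprocessing once on the ClientData so it applies to every client;
# inside TFF computations, prefer the serializable hooks (e.g. the
# dataset_computation property) over building datasets in Python.
preprocessed = emnist_train.preprocess(lambda ds: ds.batch(20))
example_batch = next(
    iter(preprocessed.create_tf_dataset_for_client(first_client_id)))
```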
Intermediate and advanced tutorials
- Random noise generation points out some subtleties with using randomness in decentralized computations, and proposes best practices and recommended patterns.
- Sending Different Data To Particular Clients With tff.federated_select introduces the tff.federated_select operator and gives a simple example of a custom federated algorithm that sends different data to different clients (a sketch follows this list).
- Client-efficient large-model federated learning via federated_select and sparse aggregation shows how TFF can be used to train a very large model where each client device only downloads and updates a small part of the model, using tff.federated_select and sparse aggregation.
- Federated Learning with Differential Privacy in TFF demonstrates how to use TFF to train models with user-level differential privacy.
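As a pointer to what the federated_select tutorials cover, the sketch below sends each client only the server "database" entries it requests by key. It follows the pattern from the Sending Different Data tutorial, assuming the tff.federated_select argument order (client keys, max key, server value, select_fn) and the tff.tf_computation / tff.type_at_* helpers available in the TFF releases those tutorials target; the types and names here are illustrative.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Each client holds a variable-length vector of int32 keys; the server holds
# a "database" of strings to select from.
client_keys_type = tff.type_at_clients(tff.TensorType(tf.int32, [None]))
max_key_type = tff.type_at_server(tf.int32)
database_type = tff.type_at_server(tff.TensorType(tf.string, [None]))

@tff.tf_computation(tff.TensorType(tf.string, [None]), tf.int32)
def select_fn(database, key):
  # Given the server value and a single key, return the selected entry.
  return tf.gather(database, key)

@tff.federated_computation(client_keys_type, max_key_type, database_type)
def send_selected_data(client_keys, max_key, database):
  # Each client receives a sequence containing only the entries it asked for.
  return tff.federated_select(client_keys, max_key, database, select_fn)
```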