TensorFlow Federated Tutorials
These Colab-based tutorials walk you through the main TFF concepts and APIs using practical examples. Reference documentation can be found in the TFF guides.

Note: TFF currently requires Python 3.9 or later, but Google Colaboratory's hosted runtimes currently use Python 3.7, so to run these notebooks you will need to use a custom local runtime.
Getting started with federated learning
- Federated Learning for image classification introduces the key parts of the Federated Learning (FL) API, and demonstrates how to use TFF to simulate federated learning on federated MNIST-like data.
- Federated Learning for text generation further demonstrates how to use TFF's FL API to refine a serialized pre-trained model for a language modeling task.
- Tuning recommended aggregations for learning shows how the basic FL computations in `tff.learning` can be combined with specialized aggregation routines offering robustness, differential privacy, compression, and more.
- Federated Reconstruction for Matrix Factorization introduces partially local federated learning, where some client parameters are never aggregated on the server. The tutorial demonstrates how to use the Federated Learning API to train a partially local matrix factorization model.
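The tutorials above all build on the Federated Averaging idea: each client trains locally, and the server averages the client models weighted by the number of local examples. As a minimal illustration of that averaging step only, here is a plain-Python sketch (not TFF code, and not the `tff.learning` API):

```python
def federated_average(client_models, client_num_examples):
    """Weighted average of per-client model vectors (lists of floats)."""
    total = sum(client_num_examples)
    averaged = [0.0] * len(client_models[0])
    for model, n in zip(client_models, client_num_examples):
        for i, w in enumerate(model):
            # Each client's weights contribute proportionally to its data size.
            averaged[i] += w * (n / total)
    return averaged

# Two clients: one with 3 examples, one with 1, so the first carries 3x weight.
print(federated_average([[1.0, 2.0], [5.0, 6.0]], [3, 1]))  # [2.0, 3.0]
```

In TFF itself this aggregation is expressed as a federated computation rather than a loop over clients, which is what the tutorials demonstrate.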
Getting started with federated analytics

- Private Heavy Hitters shows how to use `tff.analytics.heavy_hitters` to build a federated analytics computation to discover private heavy hitters.
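The heavy-hitters problem is to find items that appear across many clients without collecting any single client's raw data. A toy, non-private sketch of the counting idea (plain Python, not the `tff.analytics` API; the real computation adds privacy protections):

```python
from collections import Counter

def heavy_hitters(client_words, min_clients=2):
    """Return items reported by at least `min_clients` distinct clients."""
    counts = Counter()
    for words in client_words:
        counts.update(set(words))  # each client counts at most once per item
    return {w for w, c in counts.items() if c >= min_clients}

clients = [["cat", "dog", "cat"], ["dog", "fish"], ["dog", "cat"]]
print(sorted(heavy_hitters(clients)))  # ['cat', 'dog']
```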
Writing custom federated computations

- Building Your Own Federated Learning Algorithm shows how to use the TFF Core APIs to implement federated learning algorithms, using Federated Averaging as an example.
- Composing Learning Algorithms shows how to use the TFF Learning API to easily implement new federated learning algorithms, especially variants of Federated Averaging.
- Custom Federated Algorithms, Part 1: Introduction to the Federated Core and Part 2: Implementing Federated Averaging introduce the key concepts and interfaces offered by the Federated Core API (FC API).
- Implementing Custom Aggregations explains the design principles behind the `tff.aggregators` module and best practices for implementing custom aggregation of values from clients to server.
Simulation best practices

- TFF simulation with accelerators (GPU) shows how TFF's high-performance runtime can be used with GPUs.
- Working with ClientData gives best practices for integrating TFF's ClientData-based simulation datasets into TFF computations.
Intermediate and advanced tutorials

- Random noise generation points out some subtleties with using randomness in decentralized computations, and proposes best practices and recommended patterns.
- Sending Different Data To Particular Clients With federated_language.federated_select introduces the `federated_language.federated_select` operator and gives a simple example of a custom federated algorithm that sends different data to different clients.
- Client-efficient large-model federated learning via federated_select and sparse aggregation shows how TFF can be used to train a very large model where each client device only downloads and updates a small part of the model, using `federated_language.federated_select` and sparse aggregation.
- Federated Learning with Differential Privacy in TFF demonstrates how to use TFF to train models with user-level differential privacy.
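A recurring ingredient in the differential-privacy material in this section is clipping each client's contribution and adding calibrated noise to the aggregate. The following is a toy plain-Python sketch of that pattern only, not the TFF API and not a calibrated DP mechanism:

```python
import random

def noisy_clipped_sum(client_values, clip=1.0, noise_stddev=0.1, seed=0):
    """Clip each client's value to [-clip, clip], sum, then add Gaussian noise.

    Clipping bounds any one client's influence on the sum; the noise masks
    individual contributions. Real DP calibrates noise_stddev to clip and
    a privacy budget, which this sketch does not do.
    """
    rng = random.Random(seed)
    clipped = [max(-clip, min(clip, v)) for v in client_values]
    return sum(clipped) + rng.gauss(0.0, noise_stddev)

# 2.5 is clipped to 1.0, so the noiseless sum would be 0.4 + 1.0 - 0.3 = 1.1.
print(noisy_clipped_sum([0.4, 2.5, -0.3]))
```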
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-25 (UTC).