Module: tff.learning.optimizers
Libraries for optimization algorithms.
Classes
class Optimizer: Represents an optimizer for use in TensorFlow Federated.
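Every optimizer in this module exposes the same stateless, functional interface: initialize builds the optimizer state from a structure of tf.TensorSpec describing the model weights, and next maps (state, weights, gradients) to an updated (state, weights) pair. A minimal sketch of one update step, assuming the build_sgdm builder listed below:

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.01, momentum=0.9)

weights = (tf.constant([1.0, 2.0]),)
specs = tf.nest.map_structure(tf.TensorSpec.from_tensor, weights)

state = optimizer.initialize(specs)  # Optimizer state, e.g. momentum accumulators.
gradients = (tf.constant([0.1, 0.1]),)
state, weights = optimizer.next(state, weights, gradients)  # One update step.
```

Because the state is an explicit value rather than a hidden attribute, this pattern composes cleanly with federated computations that pass optimizer state between rounds.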
Functions
build_adafactor(...): Builds an Adafactor optimizer.
build_adagrad(...): Returns a tff.learning.optimizers.Optimizer for Adagrad.
build_adam(...): Returns a tff.learning.optimizers.Optimizer for Adam.
build_adamw(...): Returns a tff.learning.optimizers.Optimizer for AdamW.
build_rmsprop(...): Returns a tff.learning.optimizers.Optimizer for RMSprop.
build_sgdm(...): Returns a tff.learning.optimizers.Optimizer for momentum SGD.
build_yogi(...): Returns a tff.learning.optimizers.Optimizer for Yogi.
check_weights_gradients_match(...): Checks that weights and non-None gradients match.
handle_indexed_slices_gradients(...): Converts any tf.IndexedSlices to tensors (see the sketch after this list).
schedule_learning_rate(...): Returns an optimizer with a scheduled learning rate (see the sketch after this list).
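Two of these helpers benefit from a short illustration. schedule_learning_rate wraps an existing optimizer with a round-indexed learning-rate schedule, and handle_indexed_slices_gradients densifies sparse gradients such as those produced by embedding lookups. A hedged sketch of both, assuming the schedule is a callable from round number to learning rate and that the gradient helper accepts a structure of gradients:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Hypothetical schedule: halve the learning rate every 10 rounds.
def schedule(round_num):
  return 0.1 * tf.pow(0.5, tf.cast(round_num // 10, tf.float32))

optimizer = tff.learning.optimizers.schedule_learning_rate(
    tff.learning.optimizers.build_sgdm(learning_rate=0.1), schedule)

# Densify a sparse gradient, e.g. one produced by tf.gather on an embedding table.
sparse_grad = tf.IndexedSlices(
    values=tf.constant([[0.1, 0.2]]),
    indices=tf.constant([0]),
    dense_shape=tf.constant([3, 2]))
dense_grads = tff.learning.optimizers.handle_indexed_slices_gradients([sparse_grad])
```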
Other Members
LEARNING_RATE_KEY: 'learning_rate'
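The builders above keep their learning rate in the optimizer state under this key, which is how schedule_learning_rate updates it between rounds. A minimal sketch, assuming the state returned by build_sgdm is a mapping keyed by this constant:

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.01)
state = optimizer.initialize((tf.TensorSpec(shape=[2], dtype=tf.float32),))
lr = state[tff.learning.optimizers.LEARNING_RATE_KEY]  # 0.01, assuming dict-like state.
```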