tf.keras.optimizers.Adam
Optimizer that implements the Adam algorithm.
Inherits From: Optimizer
tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    name='Adam',
    **kwargs
)
Adam optimization is a stochastic gradient descent method that is based on
adaptive estimation of first-order and second-order moments.
According to Kingma et al., 2014 (http://arxiv.org/abs/1412.6980), the method is "computationally efficient, has little memory requirement, invariant to diagonal rescaling of gradients, and is well suited for problems that are large in terms of data/parameters".
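As a rough illustration of what "adaptive estimation of first- and second-order moments" means in practice, the following is a minimal NumPy sketch of the per-step update in the "epsilon hat" formulation this class uses (see the epsilon argument and Notes below). It is an illustrative sketch, not the library implementation.

import numpy as np

def adam_update(var, grad, m, v, t,
                learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    # Bias-corrected step size (formula just before Section 2.1 of the paper).
    lr_t = learning_rate * np.sqrt(1 - beta_2 ** t) / (1 - beta_1 ** t)
    m = beta_1 * m + (1 - beta_1) * grad        # 1st-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2   # 2nd-moment (uncentered variance) estimate
    var = var - lr_t * m / (np.sqrt(v) + epsilon)
    return var, m, v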
Args
learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001.
beta_1: A float value, a constant float tensor, or a callable that takes no arguments and returns the actual value to use. The exponential decay rate for the 1st moment estimates. Defaults to 0.9.
beta_2: A float value, a constant float tensor, or a callable that takes no arguments and returns the actual value to use. The exponential decay rate for the 2nd moment estimates. Defaults to 0.999.
epsilon: A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to 1e-7.
amsgrad: Boolean. Whether to apply the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond". Defaults to False.
name: Optional name for the operations created when applying gradients. Defaults to "Adam".
**kwargs: Keyword arguments. Allowed to be either "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value.
Usage:
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) == var1
step_count = opt.minimize(loss, [var1]).numpy()
# The first step is `-learning_rate*sign(grad)`
var1.numpy()
9.9
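Beyond the basic usage above, the learning_rate argument also accepts a schedule, and gradients can be clipped via the clipnorm keyword. A short sketch; the decay values below are illustrative, not recommendations:

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=10000, decay_rate=0.96)
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule, clipnorm=1.0)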
Reference:
Kingma et al., 2014: http://arxiv.org/abs/1412.6980
Reddi et al., 2018 (for amsgrad): https://openreview.net/pdf?id=ryQu7f-RZ
Notes:
The default value of 1e-7 for epsilon might not be a good default in
general. For example, when training an Inception network on ImageNet, a
current good choice is 1.0 or 0.1. Note that since Adam uses the
formulation just before Section 2.1 of the Kingma and Ba paper rather than
the formulation in Algorithm 1, the "epsilon" referred to here is "epsilon
hat" in the paper.
The sparse implementation of this algorithm (used when the gradient is an
IndexedSlices object, typically because of tf.gather
or an embedding
lookup in the forward pass) does apply momentum to variable slices even if
they were not used in the forward pass (meaning they have a gradient equal
to zero). Momentum decay (beta1) is also applied to the entire momentum
accumulator. This means that the sparse behavior is equivalent to the dense
behavior (in contrast to some momentum implementations which ignore momentum
unless a variable slice was actually used).
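As an illustrative sketch (the model below is hypothetical), gradients flowing into an Embedding layer arrive as IndexedSlices, so training such a model with Adam exercises the sparse update path described above:

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),  # produces IndexedSlices gradients
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')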
Raises
ValueError: In case of any invalid argument.