Optimizer that implements the NAdam algorithm.
Inherits From: Optimizer
```python
tf.keras.optimizers.Nadam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
    name='Nadam', **kwargs
)
```
Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
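To make that relationship concrete, here is a simplified NumPy sketch of a single Nadam parameter update: bias-corrected first and second moment estimates, plus a Nesterov-style look-ahead that blends the corrected momentum with the current gradient. This is an illustration only; TF's implementation additionally applies a momentum-decay schedule, so the exact math differs slightly.

```python
import numpy as np

def nadam_step(theta, g, m, v, t,
               lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    """One simplified Nadam update for parameters `theta` given gradient `g`.

    `m` and `v` are the running 1st/2nd moment estimates; `t` is the
    1-based step count. Not TF's exact implementation (no momentum schedule).
    """
    m = beta_1 * m + (1 - beta_1) * g          # 1st moment (momentum)
    v = beta_2 * v + (1 - beta_2) * g ** 2     # 2nd moment estimate
    m_hat = m / (1 - beta_1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta_2 ** t)
    # Nesterov look-ahead: mix corrected momentum with the current gradient.
    m_bar = beta_1 * m_hat + (1 - beta_1) * g / (1 - beta_1 ** t)
    theta = theta - lr * m_bar / (np.sqrt(v_hat) + epsilon)
    return theta, m, v
```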
| Args | |
|---|---|
| learning_rate | A Tensor or a floating point value. The learning rate. | 
| beta_1 | A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates. | 
| beta_2 | A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates. | 
| epsilon | A small constant for numerical stability. | 
| name | Optional name for the operations created when applying gradients. Defaults to "Nadam". | 
| **kwargs | Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. See the example below this table. | 
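For example, gradient clipping can be enabled through these keyword arguments (the values here are illustrative):

```python
import tensorflow as tf

# Clip each gradient tensor to an L2 norm of at most 1.0 before the
# Nadam update; clipvalue=0.5 would instead clip elementwise to [-0.5, 0.5].
opt = tf.keras.optimizers.Nadam(learning_rate=0.001, clipnorm=1.0)
```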
Usage Example:
```python
>>> opt = tf.keras.optimizers.Nadam(learning_rate=0.2)
>>> var1 = tf.Variable(10.0)
>>> loss = lambda: (var1 ** 2) / 2.0
>>> step_count = opt.minimize(loss, [var1]).numpy()
>>> "{:.1f}".format(var1.numpy())
'9.8'
```
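Beyond direct minimize calls, the optimizer can also be wired into a standard Keras training setup via Model.compile; a minimal sketch, with a hypothetical one-layer model used only for illustration:

```python
import tensorflow as tf

# Hypothetical model, just to show the optimizer passed to compile().
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=0.001),
              loss='mse')
```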
Reference:
- [Dozat, 2015](http://cs229.stanford.edu/proj2015/054_report.pdf).
| Raises | |
|---|---|
| ValueError | In case of any invalid argument. |