Optimizer that implements the NAdam algorithm.
Inherits From: Optimizer
tf.keras.optimizers.Nadam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Nadam',
    **kwargs
)
Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
Initialization:

$$m_0 := 0 \text{ (Initialize 1st moment vector)}$$
$$v_0 := 0 \text{ (Initialize 2nd moment vector)}$$
$$\mu_0 := 1$$
$$t := 0 \text{ (Initialize timestep)}$$

Computes:

$$\mu_t := \beta_1 \left(1 - 0.5 \cdot 0.96^{0.004\,t}\right)$$
$$g' := g / \left(1 - \prod_{i=1}^{t}{\mu_i}\right)$$
$$m_t := \beta_1 m_{t-1} + (1 - \beta_1) g$$
$$m' := m_t / \left(1 - \prod_{i=1}^{t+1}{\mu_i}\right)$$
$$v_t := \beta_2 v_{t-1} + (1 - \beta_2) g^2$$
$$v' := v_t / (1 - \beta_2^t)$$
$$\bar{m} := (1 - \mu_t)\, g' + \mu_{t+1}\, m'$$
$$\theta_t := \theta_{t-1} - \text{lr} \cdot \bar{m} / (\sqrt{v'} + \epsilon)$$

The gradient is evaluated at theta(t) + momentum * v(t), and the variables always store theta + beta_1 * m / sqrt(v) instead of theta.
References: Dozat, T., 2015. Incorporating Nesterov Momentum into Adam (http://cs229.stanford.edu/proj2015/054_report.pdf).
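As a quick illustration of the constructor defaults above, here is a minimal usage sketch with the Keras compile/fit workflow; the model and data are made up for the example:

import numpy as np
import tensorflow as tf

# Nadam with its default hyperparameters, as in the signature above.
opt = tf.keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9,
                                beta_2=0.999, epsilon=1e-07)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=opt, loss='mse')
# Toy data purely for illustration.
x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 1).astype('float32')
model.fit(x, y, epochs=1, verbose=0)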
Args | |
---|---|
learning_rate | A Tensor or a floating point value. The learning rate.
beta_1 | A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2 | A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
epsilon | A small constant for numerical stability.
name | Optional name for the operations created when applying gradients. Defaults to "Nadam".
**kwargs | Keyword arguments. Allowed to be {clipnorm, clipvalue, lr, decay}. clipnorm clips gradients by norm; clipvalue clips gradients by value; decay is included for backward compatibility to allow time-based inverse decay of the learning rate; lr is included for backward compatibility, and learning_rate is recommended instead.
Attributes | |
---|---|
iterations | Variable. The number of training steps this Optimizer has run.
weights | Returns variables of this Optimizer based on the order created.
Methods
add_slot
add_slot(
    var, slot_name, initializer='zeros'
)
Add a new slot variable for var.
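A minimal sketch of add_slot; the slot name 'custom_state' is hypothetical and used only for illustration, since the built-in Nadam creates its own slots internally:

import tensorflow as tf

opt = tf.keras.optimizers.Nadam()
var = tf.Variable([1.0, 2.0])
# 'custom_state' is a hypothetical slot name for illustration.
slot = opt.add_slot(var, 'custom_state', initializer='zeros')
print(slot.shape)  # Same shape as var: (2,)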
add_weight
add_weight(
    name, shape, dtype=None, initializer='zeros', trainable=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.compat.v1.VariableAggregation.NONE
)
apply_gradients
apply_gradients(
    grads_and_vars, name=None, experimental_aggregate_gradients=True
)
Apply gradients to variables.
This is the second part of minimize(). It returns an Operation that applies gradients.
The method sums gradients from all replicas in the presence of tf.distribute.Strategy by default. You can aggregate gradients yourself by passing experimental_aggregate_gradients=False.
Example:
# Inside a custom training step running under a tf.distribute.Strategy:
grads = tape.gradient(loss, vars)
# Aggregate the gradients across replicas manually.
grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
# Process the aggregated gradients here, then apply them without letting
# the optimizer aggregate them a second time.
optimizer.apply_gradients(zip(grads, vars),
                          experimental_aggregate_gradients=False)
Args | |
---|---|
grads_and_vars | List of (gradient, variable) pairs.
name | Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
experimental_aggregate_gradients | Whether to sum gradients from different replicas in the presence of tf.distribute.Strategy. If False, it's the user's responsibility to aggregate the gradients. Defaults to True.
Returns | |
---|---|
An Operation that applies the specified gradients. The iterations will be automatically increased by 1.
Raises | |
---|---|
TypeError | If grads_and_vars is malformed.
ValueError | If none of the variables have gradients.
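For the common single-replica case, a minimal sketch of a custom training step using apply_gradients; the variable and loss here are illustrative:

import tensorflow as tf

opt = tf.keras.optimizers.Nadam(learning_rate=0.001)
w = tf.Variable([3.0, 4.0])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(w ** 2)  # Toy loss for illustration.
grads = tape.gradient(loss, [w])
opt.apply_gradients(zip(grads, [w]))  # Also increments opt.iterations by 1.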
from_config
@classmethod
from_config(
    config, custom_objects=None
)
Creates an optimizer from its config.
This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.
Arguments | |
---|---|
config | A Python dictionary, typically the output of get_config.
custom_objects | A Python dictionary mapping names to additional Python objects used to create this optimizer, such as a function used for a hyperparameter.
Returns | |
---|---|
An optimizer instance.
get_config
get_config()
Returns the config of the optimizer.
An optimizer config is a Python dictionary (serializable) containing the configuration of an optimizer. The same optimizer can be reinstantiated later (without any saved state) from this configuration.
Returns | |
---|---|
Python dictionary.
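A sketch of the round trip between get_config and from_config:

import tensorflow as tf

opt = tf.keras.optimizers.Nadam(learning_rate=0.002)
config = opt.get_config()  # Plain, serializable Python dictionary.
restored = tf.keras.optimizers.Nadam.from_config(config)
print(restored.get_config()['learning_rate'])  # 0.002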
get_gradients
get_gradients(
    loss, params
)
Returns gradients of loss with respect to params.
Arguments | |
---|---|
loss | Loss tensor.
params | List of variables.
Returns | |
---|---|
List of gradient tensors.
Raises | |
---|---|
ValueError | In case any gradient cannot be computed (e.g. if the gradient function is not implemented).
get_slot
get_slot(
    var, slot_name
)
get_slot_names
get_slot_names()
A list of names for this optimizer's slots.
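As a sketch, slots only exist after the first update has been applied; for Nadam the slot names are expected to be 'm' and 'v' (the first- and second-moment estimates):

import tensorflow as tf

opt = tf.keras.optimizers.Nadam()
var = tf.Variable(1.0)
opt.apply_gradients([(tf.constant(0.5), var)])  # Slots are created lazily here.
print(opt.get_slot_names())    # Expected: ['m', 'v']
print(opt.get_slot(var, 'm'))  # The first-moment slot variable for var.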
get_updates
get_updates(
    loss, params
)
get_weights
get_weights()
Returns the current weights of the optimizer.
The weights of an optimizer are its state (i.e., variables). This function returns the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer's state variables in the order they were created. The returned list can in turn be used to load state into similarly parameterized optimizers.
For example, the RMSprop optimizer for this simple model returns a list of three values: the iteration count, followed by the root-mean-square value of the kernel and bias of the single Dense layer:
import numpy as np
import tensorflow as tf

opt = tf.keras.optimizers.RMSprop()
m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
m.compile(opt, loss='mse')
data = np.arange(100).reshape(5, 20)
labels = np.zeros(5)
print('Training'); results = m.fit(data, labels)
# Training ...
print(len(opt.get_weights()))
# 3
Returns | |
---|---|
Weight values as a list of numpy arrays.
minimize
minimize(
    loss, var_list, grad_loss=None, name=None
)
Minimize loss by updating var_list.
This method simply computes the gradients using tf.GradientTape and calls apply_gradients(). If you want to process the gradients before applying them, call tf.GradientTape and apply_gradients() explicitly instead of using this function.
Args | |
---|---|
loss | A callable taking no arguments which returns the value to minimize.
var_list | List or tuple of Variable objects to update to minimize loss, or a callable returning the list or tuple of Variable objects. Use a callable when the variable list would otherwise be incomplete before minimize is called, since the variables are created the first time loss is called.
grad_loss | Optional. A Tensor holding the gradient computed for loss.
name | Optional name for the returned operation.
Returns | |
---|---|
An Operation that updates the variables in var_list. The iterations will be automatically increased by 1.
Raises | |
---|---|
ValueError | If some of the variables are not Variable objects.
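A minimal sketch of minimize with a callable loss; the variable and loss are illustrative:

import tensorflow as tf

var = tf.Variable(2.0)
loss = lambda: (var ** 2) / 2.0  # Callable taking no arguments.
opt = tf.keras.optimizers.Nadam(learning_rate=0.1)
opt.minimize(loss, var_list=[var])  # One update step; increments opt.iterations.
print(var.numpy())  # Slightly less than 2.0 after the step.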
set_weights
set_weights(
    weights
)
Set the weights of the optimizer.
The weights of an optimizer are its state (i.e., variables). This function takes the weight values associated with this optimizer as a list of Numpy arrays. The first value is always the iterations count of the optimizer, followed by the optimizer's state variables in the order they are created. The passed values are used to set the new state of the optimizer.
For example, the RMSprop optimizer for this simple model takes a list of three values: the iteration count, followed by the root-mean-square value of the kernel and bias of the single Dense layer:
import numpy as np
import tensorflow as tf

opt = tf.keras.optimizers.RMSprop()
m = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
m.compile(opt, loss='mse')
data = np.arange(100).reshape(5, 20)
labels = np.zeros(5)
print('Training'); results = m.fit(data, labels)
# Training ...
new_weights = [np.array(10), np.ones([20, 10]), np.zeros([10])]
opt.set_weights(new_weights)
print(opt.iterations)
# <tf.Variable 'RMSprop/iter:0' shape=() dtype=int64, numpy=10>
Arguments | |
---|---|
weights | Weight values as a list of numpy arrays.
variables
variables()
Returns variables of this Optimizer based on the order created.