
tf.keras.mixed_precision.experimental.LossScaleOptimizer


An optimizer that applies loss scaling.

Inherits From: Optimizer

Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is:

loss = ...
loss *= loss_scale              # scale the loss before differentiation
grads = gradients(loss, vars)   # every intermediate gradient comes out scaled too
grads /= loss_scale             # unscale before applying to the variables

Mathematically, loss scaling has no effect, but can help avoid numerical underflow in intermediate gradients when float16 tensors are used. By multiplying the loss, each intermediate gradient will have the same multiplier applied.
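As a rough illustration of the underflow problem (a sketch, not taken from this API; the gradient value and loss scale below are arbitrary), a value that is too small for float16 collapses to zero unless it is scaled up first:

import tensorflow as tf

# An unscaled gradient of 1e-10 is below float16's smallest representable
# magnitude, so converting it to float16 underflows to exactly 0.0.
print(tf.constant(1e-10, dtype=tf.float16).numpy())  # 0.0

# Multiplying by a loss scale of 2**15 first keeps the value representable,
# and dividing the float32 result by the same scale recovers roughly 1e-10.
loss_scale = 2.0 ** 15
scaled = tf.constant(1e-10 * loss_scale, dtype=tf.float16)
print(scaled.numpy())                                    # small but nonzero
print(tf.cast(scaled, tf.float32).numpy() / loss_scale)  # ~1e-10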

The loss scale can either be a fixed constant chosen by the user, or be dynamically determined. Determining the loss scale dynamically is convenient, as a value does not have to be chosen explicitly, but it reduces performance.
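For example, both options can be passed straight to the constructor (a minimal sketch; the learning rate and fixed loss scale values are arbitrary):

import tensorflow as tf

LossScaleOptimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer

# Fixed loss scale: no update overhead, but the value must be chosen by hand.
fixed_opt = LossScaleOptimizer(tf.keras.optimizers.SGD(0.25), loss_scale=128)

# Dynamic loss scale: the scale is adjusted automatically during training,
# at some performance cost.
dynamic_opt = LossScaleOptimizer(tf.keras.optimizers.SGD(0.25),
                                 loss_scale="dynamic")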

This optimizer wraps another optimizer and applies loss scaling to it via a LossScale. Loss scaling is applied whenever gradients are computed, either through minimize() or get_gradients(). The loss scale is updated via LossScale.update() whenever gradients are applied, either through minimize() or apply_gradients(). For example:

opt = tf.keras.optimizers.SGD(0.25)
opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt,
                                                               "dynamic")
var = tf.Variable(1.)
loss_fn = lambda: var ** 2
# 'minimize' applies loss scaling to the loss and updates the loss scale.
opt.minimize(loss_fn, var_list=var)
var.numpy()  # 0.5
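Here the unscaled gradient of var ** 2 at var = 1.0 is 2.0, so SGD with learning rate 0.25 updates the variable to 1.0 - 0.25 * 2.0 = 0.5; the loss scale cancels out and does not change the result.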

If a tf.GradientTape is used to compute gradients instead of LossScaleOptimizer.minimize or LossScaleOptimizer.get_gradients, the loss and gradients must be scaled manually. This can be done by calling LossScaleOptimizer.get_scaled_loss before passing the loss to tf.GradientTape, and LossScaleOptimizer.get_unscaled_gradients after computing the gradients with tf.GradientTape. For example:

with tf.GradientTape() as tape:
  loss = loss_fn()
  scaled_loss = opt.get_scaled_loss(loss)
scaled_grad = tape.gradient(scaled_loss, var)
(grad,) = opt.get_unscaled_gradients([scaled_grad])
opt.apply_gradients([(grad, var)])  # Loss scale is updated here
var.numpy()  # 0.25
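Starting from 0.5, the unscaled gradient is 2 * 0.5 = 1.0, so the update is 0.5 - 0.25 * 1.0 = 0.25, again unaffected by the loss scale.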

Args
optimizer The Optimizer instance to wrap.
loss_scale The loss scale used to scale the loss and gradients. This can be an int/float to use a fixed loss scale, the string "dynamic" to use dynamic loss scaling, or an instance of a LossScale. The string "dynamic" is equivalent to passing DynamicLossScale(), and passing an int/float is equivalent to passing a FixedLossScale with the given loss scale.
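As a sketch of that equivalence (assuming the LossScale classes are exposed under tf.mixed_precision.experimental in this TensorFlow version):

import tensorflow as tf

LossScaleOptimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer

# Passing an int/float is described above as equivalent to passing a
# FixedLossScale with that value...
opt_a = LossScaleOptimizer(tf.keras.optimizers.SGD(), loss_scale=512)
opt_b = LossScaleOptimizer(
    tf.keras.optimizers.SGD(),
    loss_scale=tf.mixed_precision.experimental.FixedLossScale(512))

# ...and the string "dynamic" to passing DynamicLossScale().
opt_c = LossScaleOptimizer(
    tf.keras.optimizers.SGD(),
    loss_scale=tf.mixed_precision.experimental.DynamicLossScale())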

Attributes
iterations Variable. The number of training steps this Optimizer has run.
learning_rate
loss_scale The LossScale instance associated with this optimizer.
lr
weights Returns variables of this Optimizer based on the order created.
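For example, these attributes can be read directly on the dynamically scaled optimizer built earlier (a sketch; the printed values depend on how many steps have run):

print(opt.iterations.numpy())  # number of training steps applied so far
print(opt.loss_scale)          # the LossScale instance used for scaling
print(opt.weights)             # the optimizer's variables, in creation order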

Methods

add_slot


Add a new slot variable for var.

add_weight


apply_gradients


Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.

The method sums gradients from all replicas in the presence of tf.distribute.Strategy by default. You can aggregate gradients yourself by passing experimental_aggregate_gradients=False.

Example:

grads = tape.gradient(loss, vars)
# Aggregate the gradients across replicas manually with an all-reduce...
grads = tf.distribute.get_replica_context().all_reduce('sum', grads)
# ...then tell apply_gradients not to aggregate them again.
optimizer.apply_gradients(zip(grads, vars),
                          experimental_aggregate_gradients=False)

Args
grads_and_vars List of (gradient, variable) pairs.
name Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
experimental_aggregate_gradients Whether to sum gradients from different replicas in the presence of tf.distribute.Strategy. If False, it is the user's responsibility to aggregate the gradients. Defaults to True.

Returns
An Operation that applies the specified gradients. The optimizer's iterations counter is automatically incremented by 1.

Raises
TypeError If grads_and_vars is malformed.
ValueError If none of the variables have gradients.
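Outside of a tf.distribute.Strategy, a minimal end-to-end use of apply_gradients with this optimizer might look as follows (a sketch reusing the manual-scaling pattern shown earlier; the values are illustrative):

import tensorflow as tf

opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(0.1), "dynamic")
var = tf.Variable(2.0)

with tf.GradientTape() as tape:
  scaled_loss = opt.get_scaled_loss(var * var)
scaled_grad = tape.gradient(scaled_loss, var)
(grad,) = opt.get_unscaled_gradients([scaled_grad])
opt.apply_gradients([(grad, var)])  # also updates the dynamic loss scale
print(opt.iterations.numpy())       # 1 -- incremented by apply_gradients
print(var.numpy())                  # 2.0 - 0.1 * 4.0 = 1.6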

from_config


Creates an optimizer from its config.

This method is the reverse of get_config, capable of instantiating the same optimizer from the config dictionary.
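A hedged round-trip sketch, assuming the usual Keras get_config/from_config pairing applies to this class:

import tensorflow as tf

opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(0.25), "dynamic")
config = opt.get_config()
restored = tf.keras.mixed_precision.experimental.LossScaleOptimizer.from_config(
    config)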