tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer

An optimizer that applies loss scaling.

Inherits From: Optimizer

Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is:

loss = ...
loss *= loss_scale
grads = gradients(loss, vars)
grads /= loss_scale

Mathematically, loss scaling has no effect, but can help avoid numerical underflow in intermediate gradients when float16 tensors are used for mixed precision training. By multiplying the loss, each intermediate gradient will have the same multiplier applied.
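For concreteness, the following is a minimal sketch of the pseudocode above in TF1-style graph code. The variable names and the fixed scale of 128 are illustrative assumptions, not part of this API:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# loss = ... (a toy scalar loss for illustration)
x = tf.compat.v1.get_variable('x', initializer=2.0)
loss = tf.square(x)

loss_scale = 128.0
scaled_loss = loss * loss_scale                           # loss *= loss_scale
scaled_grads = tf.compat.v1.gradients(scaled_loss, [x])   # grads = gradients(loss, vars)
grads = [g / loss_scale for g in scaled_grads]            # grads /= loss_scale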

The loss scale can either be a fixed constant, chosen by the user, or be dynamically determined. Dynamically determining the loss scale is convenient, as a loss scale does not have to be explicitly chosen. However, it reduces performance.

This optimizer wraps another optimizer and applies loss scaling to it via a LossScale. Loss scaling is applied whenever gradients are computed, such as through minimize().
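As a brief sketch, wrapping an existing optimizer might look like the following. The choice of GradientDescentOptimizer, the 'dynamic' loss scale value, and the scalar tensor loss (assumed to be built elsewhere in the graph) are assumptions for this example, not requirements of the class:

opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01)
# Wrap with loss scaling; loss_scale may be a fixed number or 'dynamic'.
opt = tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer(
    opt, loss_scale='dynamic')
train_op = opt.minimize(loss)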

Args
use_locking Bool. If True, use locks to prevent concurrent updates to variables.
name A non-empty string. The name to use for accumulators created for the optimizer.

Raises
ValueError If name is malformed.

Methods

apply_gradients


Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that conditionally applies gradients if all gradient values are finite. Otherwise no update is performed (nor is global_step incremented).

Args
grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients().
global_step Optional Variable to increment by one after the variables have been updated.
name Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns
An Operation that conditionally applies the specified gradients. If global_step was not None, that operation also increments global_step.

Raises
RuntimeError If you should use _distributed_apply() instead.
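A minimal usage sketch for this method, assuming a wrapped optimizer opt, a scalar loss tensor, and a variable list built elsewhere in the graph (the names below are illustrative, not part of the API):

global_step = tf.compat.v1.train.get_or_create_global_step()
grads_and_vars = opt.compute_gradients(loss, var_list)
# Gradients are applied only if all of them are finite; otherwise the update
# is skipped and global_step is not incremented.
train_op = opt.apply_gradients(grads_and_vars, global_step=global_step)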