An optimizer that applies loss scaling.
```python
tf.compat.v1.mixed_precision.MixedPrecisionLossScaleOptimizer(
    opt, loss_scale
)
```
Loss scaling is a process that multiplies the loss by a multiplier called the loss scale, and divides each gradient by the same multiplier. The pseudocode for this process is:
```python
loss = ...
loss *= loss_scale
grads = gradients(loss, vars)
grads /= loss_scale
```
Mathematically, loss scaling has no effect, but can help avoid numerical underflow in intermediate gradients when float16 tensors are used for mixed precision training. By multiplying the loss, each intermediate gradient will have the same multiplier applied.
The loss scale can either be a fixed constant, chosen by the user, or be dynamically determined. Dynamically determining the loss scale is convenient, as a loss scale does not have to be explicitly chosen. However, it reduces performance.
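As a minimal sketch (assuming the TF 1.x graph-mode API, with a hypothetical inner optimizer and learning rate), an int or float selects a fixed loss scale, while the string `'dynamic'` selects a dynamically determined one:

```python
import tensorflow.compat.v1 as tf

# Inner optimizer whose loss and gradients will be scaled
# (hypothetical learning rate).
inner_opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# Fixed loss scale: a user-chosen constant multiplier.
fixed_opt = tf.mixed_precision.MixedPrecisionLossScaleOptimizer(
    inner_opt, loss_scale=128)

# Dynamic loss scale: determined automatically during training,
# at some cost in performance.
dynamic_opt = tf.mixed_precision.MixedPrecisionLossScaleOptimizer(
    inner_opt, loss_scale='dynamic')
```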
This optimizer wraps another optimizer and applies loss scaling to it via a `LossScale`. Loss scaling is applied whenever gradients are computed, such as through `minimize()` or `compute_gradients()`.
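For illustration, here is a sketch of the wrapper in use, with a toy variable and loss (assumed, not from the original); the scaling and unscaling happen entirely inside `minimize()`:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.Variable(1.0)
loss = tf.square(x)  # toy loss

opt = tf.mixed_precision.MixedPrecisionLossScaleOptimizer(
    tf.train.GradientDescentOptimizer(0.1), loss_scale='dynamic')

# minimize() scales the loss, computes gradients, unscales them,
# and applies them only if all gradient values are finite.
train_op = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
```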
Args

| `use_locking` | Bool. If `True`, use locks to prevent concurrent updates to variables. |
| `name` | A non-empty string. The name to use for accumulators created for the optimizer. |

Raises

| `ValueError` | If `name` is malformed. |
```python
apply_gradients(
    grads_and_vars, global_step=None, name=None
)
```
Apply gradients to variables.
This is the second part of `minimize()`. It returns an `Operation` that conditionally applies gradients if all gradient values are finite. Otherwise no update is performed (nor is `global_step` incremented).
Args

| `grads_and_vars` | List of (gradient, variable) pairs as returned by `compute_gradients()`. |
| `global_step` | Optional `Variable` to increment by one after the variables have been updated. |
| `name` | Optional name for the returned operation. Defaults to the name passed to the `Optimizer` constructor. |
Raises

| `RuntimeError` | If you should use `_distributed_apply()` instead. |
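And a sketch of the two-step `compute_gradients()` / `apply_gradients()` flow this method completes, under the same toy-model assumptions as above:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.Variable(2.0)
loss = tf.square(x)  # toy loss

global_step = tf.train.get_or_create_global_step()
opt = tf.mixed_precision.MixedPrecisionLossScaleOptimizer(
    tf.train.GradientDescentOptimizer(0.1), loss_scale='dynamic')

# First part: gradients are computed on the scaled loss, then
# unscaled before being returned here.
grads_and_vars = opt.compute_gradients(loss)

# Second part: the update (and the global_step increment) runs
# only if every gradient value is finite.
train_op = opt.apply_gradients(grads_and_vars, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
```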