tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager
Loss scale manager that uses an exponential update strategy.
Inherits From: LossScaleManager
tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
init_loss_scale, incr_every_n_steps, decr_every_n_nan_or_inf=2, incr_ratio=2,
decr_ratio=0.8
)
In general, the strategy increases the loss scale by a greater-than-one factor
after encountering a run of consecutive steps with finite gradients. Similarly,
it decreases the loss scale by a less-than-one factor once the accumulated
number of steps with non-finite (NaN or Inf) gradients reaches a threshold. An
update is not applied if its result would be less than 1 or would overflow the
float32 dynamic range.

The counts of finite and non-finite steps are reset every time the loss scale
changes. The condition for decreasing the loss scale is looser than the one for
increasing it, since the former does not require the steps to be consecutive.
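To illustrate the bookkeeping above, here is a minimal plain-Python sketch of
the update rule. It is not the tf.contrib implementation (which tracks this
state in TF variables and ops); the state dict and its key names are
hypothetical:

    FLOAT32_MAX = 3.4028235e38  # upper end of the float32 dynamic range

    def update_loss_scale_sketch(state, finite_grads):
        # Hypothetical pure-Python sketch of the exponential update rule.
        if finite_grads:
            state["good_steps"] += 1
            if state["good_steps"] >= state["incr_every_n_steps"]:
                new_scale = state["loss_scale"] * state["incr_ratio"]
                if new_scale <= FLOAT32_MAX:  # skip an update that overflows float32
                    state["loss_scale"] = new_scale
                state["good_steps"] = 0  # both counters clear on an update attempt
                state["bad_steps"] = 0
        else:
            state["good_steps"] = 0  # an increase requires consecutive finite steps
            state["bad_steps"] += 1  # a decrease only needs accumulated bad steps
            if state["bad_steps"] >= state["decr_every_n_nan_or_inf"]:
                new_scale = state["loss_scale"] * state["decr_ratio"]
                if new_scale >= 1.0:  # skip an update that falls below 1
                    state["loss_scale"] = new_scale
                state["good_steps"] = 0
                state["bad_steps"] = 0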
Args

init_loss_scale: A Python float. The loss scale to use at the beginning.
incr_every_n_steps: Increases the loss scale every n consecutive steps with finite gradients.
decr_every_n_nan_or_inf: Decreases the loss scale every n accumulated steps with NaN or Inf gradients.
incr_ratio: The multiplier to use when increasing the loss scale.
decr_ratio: The less-than-one multiplier to use when decreasing the loss scale.
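A minimal usage sketch, assuming TensorFlow 1.15 (tf.contrib was removed in
TF 2.x); the optimizer choice and hyperparameter values here are illustrative
only:

    import tensorflow as tf  # TensorFlow 1.x

    loss_scale_manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
        init_loss_scale=2 ** 15,    # start large; shrinks quickly on overflow
        incr_every_n_steps=2000,    # double after 2000 consecutive finite steps
        decr_every_n_nan_or_inf=2,  # shrink after 2 accumulated non-finite steps
        incr_ratio=2,
        decr_ratio=0.5)

    # Wrap a regular optimizer so the loss is scaled before gradients are
    # computed and the gradients are unscaled before being applied.
    opt = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
    opt = tf.contrib.mixed_precision.LossScaleOptimizer(opt, loss_scale_manager)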
Methods
get_loss_scale
get_loss_scale()
Returns the loss scale.
update_loss_scale
update_loss_scale(
finite_grads
)
Updates the loss scale based on whether the gradients are finite in the current step.
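A sketch of driving both methods by hand in a TF 1.x graph (normally
LossScaleOptimizer calls them for you); the finite_grads constant below is a
stand-in for a real finiteness check such as tf.reduce_all(tf.is_finite(g))
over the gradients:

    import tensorflow as tf  # TensorFlow 1.x

    manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
        init_loss_scale=2 ** 10, incr_every_n_steps=5)

    loss_scale = manager.get_loss_scale()  # scalar tensor holding the current scale
    finite_grads = tf.constant(True)       # stand-in for a gradient finiteness check
    update_op = manager.update_loss_scale(finite_grads)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(5):
            sess.run(update_op)            # five consecutive finite steps
        # Expect roughly 2 ** 11 here: the scale doubles once the run of
        # finite steps reaches incr_every_n_steps.
        print(sess.run(loss_scale))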