tf.contrib.mixed_precision.LossScaleManager
Abstract loss scale manager class.
Loss scale managers implementing different strategies should subclass this class.
Loss scaling is a process that:
1) Applies a multiplier to the loss before computing gradients, and
2) Applies the reciprocal of the multiplier to the gradients before they are
applied to the variables.
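A minimal TF 1.x graph-mode sketch of these two steps, using a toy one-variable model (the variable `w`, the constant scale of 128, and the session setup are illustrative assumptions, not part of this class):

```python
import tensorflow as tf  # TF 1.x, graph mode

# Toy model: one float32 variable and a squared-error loss.
w = tf.Variable(3.0, dtype=tf.float32)
loss = tf.square(w - 1.0)

# 1) Multiply the loss by the scale before differentiating ...
loss_scale = tf.constant(128.0, dtype=tf.float32)
scaled_grads = tf.gradients(loss * loss_scale, [w])

# 2) ... then apply the reciprocal so the final gradients are unchanged.
grads = [g * (1.0 / loss_scale) for g in scaled_grads]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))  # [4.0] -- same gradient as without scaling
```

Scaling the loss keeps small float16 gradient values from underflowing to zero during backpropagation; unscaling afterwards restores the gradients' true magnitudes before the variable update.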
This class is used together with
tf.contrib.mixed_precision.LossScaleOptimizer
for mixed precision training
(float32 variables and float16 ops) on Nvidia GPUs, in order to achieve the
same model quality as single-precision training with the benefit of
potentially higher throughput.
See tf.contrib.mixed_precision.LossScaleOptimizer
for more details.
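For example, a manager is paired with the optimizer wrapper like this (the concrete FixedLossScaleManager subclass is part of tf.contrib.mixed_precision; the MomentumOptimizer hyperparameters here are illustrative choices):

```python
import tensorflow as tf

# Wrap any tf.train optimizer; the manager supplies the scale each step.
opt = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
manager = tf.contrib.mixed_precision.FixedLossScaleManager(loss_scale=128)
opt = tf.contrib.mixed_precision.LossScaleOptimizer(opt, manager)
# opt.minimize(loss) now scales the loss and unscales the gradients.
```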
Methods
get_loss_scale
[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/mixed_precision/python/loss_scale_manager.py#L52-L55)
@abc.abstractmethod
get_loss_scale()
Returns the loss scale as a scalar float32 tensor.
update_loss_scale
[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/mixed_precision/python/loss_scale_manager.py#L57-L70)
@abc.abstractmethod
update_loss_scale(
    finite_grads
)
Updates the loss scale based on whether the gradients in the current step are finite.
Args
finite_grads: A bool scalar tensor indicating whether all gradients are finite (i.e., not inf or nan).
Returns
An op that, when executed, updates the loss scale. If eager execution is enabled, nothing is returned.
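Putting the two abstract methods together, here is a minimal concrete subclass that keeps a constant scale (the name ConstantLossScaleManager is hypothetical; the shipped FixedLossScaleManager behaves similarly):

```python
import tensorflow as tf

class ConstantLossScaleManager(tf.contrib.mixed_precision.LossScaleManager):
  """Hypothetical manager that never changes its loss scale."""

  def __init__(self, loss_scale):
    self._loss_scale = tf.convert_to_tensor(loss_scale, dtype=tf.float32)

  def get_loss_scale(self):
    # Contract: a scalar float32 tensor.
    return self._loss_scale

  def update_loss_scale(self, finite_grads):
    # A constant scale ignores the overflow signal; return a no-op so
    # graph-mode callers still get an op to execute.
    del finite_grads  # unused
    return tf.no_op()
```

A dynamic strategy would instead use `finite_grads` to grow the scale after a run of finite steps and shrink it when an inf/nan is detected.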