# tf.contrib.mixed_precision.FixedLossScaleManager
Loss scale manager with a fixed loss scale.
Inherits From: [`LossScaleManager`](../../../tf/contrib/mixed_precision/LossScaleManager)
    tf.contrib.mixed_precision.FixedLossScaleManager(
        loss_scale
    )
The loss scale is not updated for the lifetime of the class.
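The mechanics behind a fixed loss scale can be shown in plain Python (an illustrative sketch, not the TF API): the loss is multiplied by the scale before backpropagation, so every gradient comes out multiplied by the same factor, and dividing the gradients by the scale before the weight update recovers the original values. With a power-of-two scale the round trip is exact in floating point.

```python
# Illustrative sketch of fixed loss scaling (plain Python, not TF code).
# Scaling the loss by S scales every gradient by S; dividing the
# gradients by S before the optimizer step recovers the true gradient.

loss_scale = 128.0      # a fixed power-of-two scale, chosen once
grad = 3e-5             # true gradient of the unscaled loss

scaled_grad = grad * loss_scale      # what backprop produces
unscaled = scaled_grad / loss_scale  # what the optimizer applies

# Multiplying and dividing by a power of two only shifts the exponent,
# so the round trip is exact (barring overflow).
assert unscaled == grad
```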
| Args | |
|---|---|
| `loss_scale` | A Python float. Its ideal value varies from model to model. A `loss_scale` that is too small can degrade model quality; one that is too large can cause gradients to overflow to `inf` or `nan`. There is no single right `loss_scale` to apply; there is no harm in choosing a relatively large value as long as no `inf` or `nan` is encountered during training. |
| Raises | |
|---|---|
| `ValueError` | If `loss_scale` is less than 1. |
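The documented behavior can be summarized with a pure-Python stand-in (a hypothetical sketch, not the `tf.contrib` implementation): the constructor rejects a scale below 1, `get_loss_scale` always returns the same value, and `update_loss_scale` is a no-op because the scale is fixed for the lifetime of the object.

```python
class FixedLossScaleManagerSketch:
    """Pure-Python sketch of the documented behavior (not the TF class)."""

    def __init__(self, loss_scale):
        # Mirrors the documented ValueError for loss_scale < 1.
        if loss_scale < 1:
            raise ValueError("loss_scale must be at least 1.")
        self._loss_scale = float(loss_scale)

    def get_loss_scale(self):
        # The real method returns a scalar float32 tensor; a plain
        # float stands in for it here.
        return self._loss_scale

    def update_loss_scale(self, finite_grads):
        # Fixed manager: the scale is never updated, regardless of
        # whether the gradients were finite this step.
        pass


mgr = FixedLossScaleManagerSketch(128.0)
mgr.update_loss_scale(finite_grads=False)  # no effect on a fixed scale
assert mgr.get_loss_scale() == 128.0
```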
Methods
-------

### `get_loss_scale`

[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/mixed_precision/python/loss_scale_manager.py#L96-L97)

    get_loss_scale()

Returns the loss scale as a scalar `float32` tensor.
### `update_loss_scale`

[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/mixed_precision/python/loss_scale_manager.py#L99-L101)

    update_loss_scale(
        finite_grads
    )

Updates the loss scale based on whether the gradients are finite in the current step.
| Args | |
|---|---|
| `finite_grads` | A boolean scalar tensor indicating whether all gradients are finite (i.e., not `inf` or `nan`). |
| Returns |
|---|
| An op that, when executed, updates the loss scale. If eager execution is enabled, nothing is returned. |
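The `finite_grads` flag corresponds to an "all gradients are finite" check. In plain Python terms (a sketch, not the TF graph op), such a flag could be computed like this:

```python
import math

def all_grads_finite(grads):
    """True iff every gradient value is finite (no inf or nan).

    Hypothetical helper for illustration; the TF pipeline computes
    this as a scalar bool tensor over the gradient tensors.
    """
    return all(math.isfinite(g) for g in grads)

assert all_grads_finite([0.1, -2.0, 3e5])
assert not all_grads_finite([0.1, float("inf")])
assert not all_grads_finite([float("nan")])
```

For `FixedLossScaleManager` this flag has no effect, since the scale is never updated; it matters for managers that adjust the scale dynamically.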
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.