tf.distribute.get_loss_reduction
tf.distribute.ReduceOp corresponding to the last loss reduction.
tf.distribute.get_loss_reduction()
This is used to decide whether the loss should be scaled in the optimizer (used only for the estimator + v1 optimizer use case).
[null,null,["Last updated 2020-10-01 UTC."],[],[],null,["# tf.distribute.get_loss_reduction\n\n\u003cbr /\u003e\n\n|-----------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/distribute/distribute_lib.py#L172-L191) |\n\n[`tf.distribute.ReduceOp`](../../tf/distribute/ReduceOp) corresponding to the last loss reduction.\n\n#### View aliases\n\n\n**Main aliases**\n\n\\`tf.contrib.distribute.get_loss_reduction\\`\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.distribute.get_loss_reduction`](/api_docs/python/tf/compat/v1/distribute/get_loss_reduction)\n\n\u003cbr /\u003e\n\n tf.distribute.get_loss_reduction()\n\nThis is used to decide whether loss should be scaled in optimizer (used only\nfor estimator + v1 optimizer use case).\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| [`tf.distribute.ReduceOp`](../../tf/distribute/ReduceOp) corresponding to the last loss reduction for estimator and v1 optimizer use case. [`tf.distribute.ReduceOp.SUM`](../../tf/distribute/ReduceOp#SUM) otherwise. ||\n\n\u003cbr /\u003e"]]