AUTO: Indicates that the reduction option will be determined by the usage
context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When
used with tf.distribute.Strategy, outside of built-in training loops such
as tf.keras compile and fit, we expect the reduction value to be
SUM or NONE. Using AUTO in that case will raise an error.
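For example, a loss left at the default AUTO reduction resolves to
SUM_OVER_BATCH_SIZE when called on its own, while in a custom training loop
under tf.distribute.Strategy the reduction has to be set explicitly. A minimal
sketch (the tensors and the choice of MeanSquaredError are illustrative only,
not taken from this page):

    import tensorflow as tf

    y_true = tf.constant([[0.], [1.]])
    y_pred = tf.constant([[0.5], [0.5]])

    # Default reduction is AUTO; invoked directly like this it behaves as
    # SUM_OVER_BATCH_SIZE and returns the scalar mean of the per-example losses.
    mse_auto = tf.keras.losses.MeanSquaredError()
    print(mse_auto(y_true, y_pred))   # 0.25

    # In a custom training loop under tf.distribute.Strategy, construct the
    # loss with an explicit reduction instead; AUTO would raise an error there.
    mse_sum = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.SUM)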
NONE: No additional reduction is applied to the output of the wrapped
loss function. When non-scalar losses are returned to Keras functions like
fit/evaluate, the unreduced vector loss is passed to the optimizer
but the reported loss will be a scalar value.
Caution: Verify the shape of the outputs when using Reduction.NONE. The
built-in loss functions wrapped by the loss classes reduce one dimension
(axis=-1, or axis if specified by the loss function). Reduction.NONE just
means that no additional reduction is applied by the class wrapper. For
categorical losses with an example input shape of [batch, W, H, n_classes]
the n_classes dimension is reduced. For pointwise losses you must include a
dummy axis so that [batch, W, H, 1] is reduced to [batch, W, H]. Without the
dummy axis, [batch, W, H] will be incorrectly reduced to [batch, W].
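As a quick illustration of the pointwise case, a minimal sketch (the shapes
and the choice of MeanSquaredError are illustrative, not taken from this
page):

    import tensorflow as tf

    batch, W, H = 2, 3, 3
    y_true = tf.ones([batch, W, H, 1])
    y_pred = tf.zeros([batch, W, H, 1])

    mse_none = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

    # The wrapped loss function still reduces the last axis, so the dummy
    # axis is consumed and the per-pixel losses keep their spatial shape.
    print(mse_none(y_true, y_pred).shape)                    # (2, 3, 3)

    # Without the dummy axis, the H dimension is reduced instead.
    print(mse_none(y_true[..., 0], y_pred[..., 0]).shape)    # (2, 3)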
SUM: Scalar sum of weighted losses.
SUM_OVER_BATCH_SIZE: Scalar SUM divided by the number of elements in the losses.
This reduction type is not supported when used with
tf.distribute.Strategy outside of built-in training loops like tf.keras compile/fit.
You can implement SUM_OVER_BATCH_SIZE using the global batch size like:
[null,null,["Last updated 2022-09-07 UTC."],[],[],null,["# tf.keras.losses.Reduction\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/keras-team/keras/tree/v2.7.0/keras/utils/losses_utils.py#L24-L84) |\n\nTypes of loss reduction.\n\n#### View aliases\n\n\n**Main aliases**\n\n[`tf.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction)\n\n\u003cbr /\u003e\n\nContains the following values:\n\n- `AUTO`: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with [`tf.distribute.Strategy`](../../../tf/distribute/Strategy), outside of built-in training loops such as [`tf.keras`](../../../tf/keras) `compile` and `fit`, we expect reduction value to be `SUM` or `NONE`. Using `AUTO` in that case will raise an error.\n- `NONE`: No **additional** reduction is applied to the output of the wrapped\n loss function. When non-scalar losses are returned to Keras functions like\n `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer\n but the reported loss will be a scalar value.\n\n | **Caution:** **Verify the shape of the outputs when using** [`Reduction.NONE`](../../../tf/keras/losses/Reduction#NONE). The builtin loss functions wrapped by the loss classes reduce one dimension (`axis=-1`, or `axis` if specified by loss function). [`Reduction.NONE`](../../../tf/keras/losses/Reduction#NONE) just means that no **additional** reduction is applied by the class wrapper. For categorical losses with an example input shape of `[batch, W, H, n_classes]` the `n_classes` dimension is reduced. For pointwise losses your must include a dummy axis so that `[batch, W, H, 1]` is reduced to `[batch, W, H]`. Without the dummy axis `[batch, W, H]` will be incorrectly reduced to `[batch, W]`.\n- `SUM`: Scalar sum of weighted losses.\n\n- `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses.\n This reduction type is not supported when used with\n [`tf.distribute.Strategy`](../../../tf/distribute/Strategy) outside of built-in training loops like [`tf.keras`](../../../tf/keras)\n `compile`/`fit`.\n\n You can implement 'SUM_OVER_BATCH_SIZE' using global batch size like: \n\n with strategy.scope():\n loss_obj = tf.keras.losses.CategoricalCrossentropy(\n reduction=tf.keras.losses.Reduction.NONE)\n ....\n loss = tf.reduce_sum(loss_obj(labels, predictions)) *\n (1. / global_batch_size)\n\nPlease see the [custom training guide](https://www.tensorflow.org/tutorials/distribute/custom_training) for more\ndetails on this.\n\nMethods\n-------\n\n### `all`\n\n[View source](https://github.com/keras-team/keras/tree/v2.7.0/keras/utils/losses_utils.py#L76-L78) \n\n @classmethod\n all()\n\n### `validate`\n\n[View source](https://github.com/keras-team/keras/tree/v2.7.0/keras/utils/losses_utils.py#L80-L84) \n\n @classmethod\n validate(\n key\n )\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Class Variables --------------- ||\n|---------------------|-------------------------|\n| AUTO | `'auto'` |\n| NONE | `'none'` |\n| SUM | `'sum'` |\n| SUM_OVER_BATCH_SIZE | `'sum_over_batch_size'` |\n\n\u003cbr /\u003e"]]