Types of loss reduction.
Contains the following values:
- `AUTO`: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be `SUM` or `NONE`. Using `AUTO` in that case will raise an error.
- `NONE`: No additional reduction is applied to the output of the wrapped loss function. When non-scalar losses are returned to Keras functions like `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer, but the reported loss will be a scalar value.
- `SUM`: Scalar sum of weighted losses.
- `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by the number of elements in the losses. This reduction type is not supported when used with `tf.distribute.Strategy` outside of built-in training loops like `tf.keras` `compile`/`fit`. You can implement 'SUM_OVER_BATCH_SIZE' using the global batch size like:

  ```python
  with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(
        reduction=tf.keras.losses.Reduction.NONE)
    ....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
  ```
Please see the custom training guide for more details on this.
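To make the difference between the reduction modes concrete, here is a minimal sketch using `tf.keras.losses.MeanSquaredError` on a toy two-sample batch; the input tensors are made up for illustration, and the values in the comments are the expected results for standard mean squared error.

```python
import tensorflow as tf

y_true = tf.constant([[0.0, 1.0], [0.0, 0.0]])
y_pred = tf.constant([[1.0, 1.0], [1.0, 0.0]])

# NONE: one (per-sample) loss value per example, no further reduction.
mse_none = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
print(mse_none(y_true, y_pred).numpy())   # expected: [0.5 0.5]

# SUM: scalar sum of the weighted per-example losses.
mse_sum = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM)
print(mse_sum(y_true, y_pred).numpy())    # expected: 1.0

# SUM_OVER_BATCH_SIZE (the usual AUTO default): SUM divided by the number of losses.
mse_mean = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)
print(mse_mean(y_true, y_pred).numpy())   # expected: 0.5
```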
Methods
all
@classmethod
all()
validate
@classmethod
validate(key)
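A brief sketch of how these classmethods behave: `all()` returns the valid reduction keys, and `validate(key)` raises a `ValueError` for an unrecognized key. The exact tuple shown and the invalid key `'mean'` are illustrative assumptions, not taken from this page.

```python
import tensorflow as tf

# All valid reduction keys, as their string values.
print(tf.keras.losses.Reduction.all())
# expected: ('auto', 'none', 'sum', 'sum_over_batch_size')

# validate() passes silently for a known key ...
tf.keras.losses.Reduction.validate('sum')

# ... and raises ValueError for anything else (e.g. 'mean' is not a valid key).
try:
    tf.keras.losses.Reduction.validate('mean')
except ValueError as err:
    print(err)
```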
Class Variables

| Name | Value |
|---|---|
| AUTO | `'auto'` |
| NONE | `'none'` |
| SUM | `'sum'` |
| SUM_OVER_BATCH_SIZE | `'sum_over_batch_size'` |