Loss base class.
```python
tf.keras.losses.Loss(
    reduction=losses_utils.ReductionV2.AUTO, name=None
)
```
To be implemented by subclasses:
- call(): Contains the logic for loss calculation using y_true and y_pred.
Example subclass implementation:
```python
import tensorflow as tf
from tensorflow.keras.losses import Loss

class MeanSquaredError(Loss):

  def call(self, y_true, y_pred):
    y_pred = tf.convert_to_tensor(y_pred)
    y_true = tf.cast(y_true, y_pred.dtype)
    return tf.reduce_mean(tf.math.square(y_pred - y_true), axis=-1)
```
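The subclass above can then be used like any built-in loss. A minimal usage sketch (the tensor values are illustrative only):

```python
mse = MeanSquaredError()  # the subclass defined above
y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [1., 0.]])
# __call__ applies the configured reduction on top of call();
# with the default AUTO reduction this averages the per-sample values.
print(mse(y_true, y_pred).numpy())  # 0.5
```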
When used with tf.distribute.Strategy, outside of built-in training loops
such as tf.keras compile and fit, please use 'SUM' or 'NONE' reduction
types, and reduce losses explicitly in your training loop. Using 'AUTO' or
'SUM_OVER_BATCH_SIZE' will raise an error.
Please see the custom training tutorial
(https://www.tensorflow.org/tutorials/distribute/custom_training) for more details.
You can implement 'SUM_OVER_BATCH_SIZE' using the global batch size, like:

```python
with strategy.scope():
  loss_obj = tf.keras.losses.CategoricalCrossentropy(
      reduction=tf.keras.losses.Reduction.NONE)
  ...
  loss = (tf.reduce_sum(loss_obj(labels, predictions)) *
          (1. / global_batch_size))
```
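As an alternative to the manual scaling above, `tf.nn.compute_average_loss` divides the summed per-example loss by the global batch size for you. A sketch of a per-replica train step under these constraints; `model`, `optimizer`, and `GLOBAL_BATCH_SIZE` are assumed to be defined elsewhere:

```python
loss_obj = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)

def train_step(features, labels):
  # Runs on each replica, e.g. via strategy.run(train_step, args=...).
  with tf.GradientTape() as tape:
    predictions = model(features, training=True)
    per_example_loss = loss_obj(labels, predictions)
    # Scale by the global batch size, not the per-replica batch size.
    loss = tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss
```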
| Args | |
|---|---|
| `reduction` | Type of `tf.keras.losses.Reduction` to apply to the loss. Default value is `AUTO`. `AUTO` indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of built-in training loops such as `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. Please see the custom training tutorial for more details. |
| `name` | Optional name for the instance. |
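To illustrate the `reduction` argument, a short sketch with a built-in subclass (the expected values shown in comments are for these example tensors):

```python
import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [1., 0.]])

# NONE keeps one loss value per sample.
mse_none = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
print(mse_none(y_true, y_pred).numpy())  # [0.5 0.5]

# SUM adds up the per-sample values.
mse_sum = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM)
print(mse_sum(y_true, y_pred).numpy())  # 1.0
```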
Methods
call
```python
@abc.abstractmethod
call(
    y_true, y_pred
)
```
Invokes the Loss instance.
| Args | |
|---|---|
| `y_true` | Ground truth values, with shape = `[batch_size, d0, .. dN]`, except for sparse loss functions such as sparse categorical crossentropy, where shape = `[batch_size, d0, .. dN-1]`. |
| `y_pred` | The predicted values, with shape = `[batch_size, d0, .. dN]`. |
| Returns | |
|---|---|
| Loss values with the shape `[batch_size, d0, .. dN-1]`. |
from_config
```python
@classmethod
from_config(
    config
)
```
Instantiates a Loss from its config (output of get_config()).
| Args | |
|---|---|
| config | Output of get_config(). | 
| Returns | |
|---|---|
| A `Loss` instance. |
get_config
```python
get_config()
```
Returns the config dictionary for a Loss instance.
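A minimal serialization round trip, sketched with a built-in subclass (the exact keys of the config dictionary are an implementation detail):

```python
import tensorflow as tf

loss = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM, name='my_mse')
config = loss.get_config()  # e.g. contains 'reduction' and 'name'
restored = tf.keras.losses.MeanSquaredError.from_config(config)
```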
__call__
```python
__call__(
    y_true, y_pred, sample_weight=None
)
```
Invokes the Loss instance.
| Args | |
|---|---|
| `y_true` | Ground truth values, with shape = `[batch_size, d0, .. dN]`, except for sparse loss functions such as sparse categorical crossentropy, where shape = `[batch_size, d0, .. dN-1]`. |
| `y_pred` | The predicted values, with shape = `[batch_size, d0, .. dN]`. |
| `sample_weight` | Optional `sample_weight` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `sample_weight` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `sample_weight` vector. If the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcast to this shape), then each loss element of `y_pred` is scaled by the corresponding value of `sample_weight`. (Note on `dN-1`: all loss functions reduce by 1 dimension, usually `axis=-1`.) |
| Returns | |
|---|---|
| Weighted loss float `Tensor`. If `reduction` is `NONE`, this has shape `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` because all loss functions reduce by 1 dimension, usually `axis=-1`.) |
| Raises | |
|---|---|
| ValueError | If the shape of `sample_weight` is invalid. |
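A short sketch of how `sample_weight` changes the result, using a built-in subclass with the default `SUM_OVER_BATCH_SIZE` reduction (the expected values shown in comments are for these example tensors):

```python
import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [1., 0.]])
mse = tf.keras.losses.MeanSquaredError()

# Unweighted: mean of the per-sample losses [0.5, 0.5].
print(mse(y_true, y_pred).numpy())  # 0.5

# A scalar sample_weight simply scales the loss.
print(mse(y_true, y_pred, sample_weight=2.0).numpy())  # 1.0

# A [batch_size] sample_weight rescales each sample's loss.
print(mse(y_true, y_pred,
          sample_weight=tf.constant([1., 0.])).numpy())  # 0.25
```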