Adds an Absolute Difference loss to the training procedure.
```python
tf.compat.v1.losses.absolute_difference(
    labels,
    predictions,
    weights=1.0,
    scope=None,
    loss_collection=ops.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
```
weights acts as a coefficient for the loss. If a scalar is provided, then
the loss is simply scaled by the given value. If weights is a Tensor of
shape [batch_size], then the total loss for each sample of the batch is
rescaled by the corresponding element in the weights vector. If the shape of
weights matches the shape of predictions, then the loss of each
measurable element of predictions is scaled by the corresponding value of
weights.
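The weighting behavior described above can be sketched in NumPy. This is an illustrative reference, not TensorFlow's implementation; it assumes the default SUM_BY_NONZERO_WEIGHTS reduction, which divides the sum of weighted absolute errors by the number of nonzero weight elements:

```python
import numpy as np

def absolute_difference(labels, predictions, weights=1.0):
    # Elementwise |labels - predictions|, scaled by broadcast weights.
    losses = np.abs(np.asarray(labels, dtype=float) - np.asarray(predictions, dtype=float))
    weights = np.broadcast_to(np.asarray(weights, dtype=float), losses.shape)
    weighted = losses * weights
    # SUM_BY_NONZERO_WEIGHTS: total weighted loss divided by the
    # number of nonzero weight elements (0.0 if all weights are zero).
    nonzero = np.count_nonzero(weights)
    return weighted.sum() / nonzero if nonzero else 0.0

labels = np.array([[1.0, 2.0], [3.0, 4.0]])
predictions = np.array([[1.5, 2.0], [2.0, 4.5]])

# Scalar weight: the loss is simply scaled by the given value.
print(absolute_difference(labels, predictions))  # 0.5

# Per-sample weights: with plain NumPy broadcasting, a [batch_size]
# vector must be reshaped to [batch_size, 1] to scale each row.
print(absolute_difference(labels, predictions, np.array([[1.0], [0.0]])))  # 0.25
```

Note that the second call zeroes out the second sample's loss, and the zero weights are also excluded from the divisor, matching the SUM_BY_NONZERO_WEIGHTS behavior.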
Returns:
Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar.
Raises:
ValueError: If the shape of predictions doesn't match that of labels, if the shape of weights is invalid, or if labels or predictions is None.
Eager Compatibility:
The loss_collection argument is ignored when executing eagerly. Consider
holding on to the return value or collecting losses via a tf.keras.Model.