tfr.keras.losses.PairwiseHingeLoss

Computes pairwise hinge loss between y_true and y_pred.

For each list of scores s in y_pred and list of labels y in y_true:

loss = sum_i sum_j I[y_i > y_j] * max(0, 1 - (s_i - s_j))

Standalone usage:

import tensorflow as tf
import tensorflow_ranking as tfr

y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]
loss = tfr.keras.losses.PairwiseHingeLoss()
loss(y_true, y_pred).numpy()
0.6
# Using ragged tensors
y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
loss = tfr.keras.losses.PairwiseHingeLoss(ragged=True)
loss(y_true, y_pred).numpy()
0.41666666

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tfr.keras.losses.PairwiseHingeLoss())
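As a fuller, illustrative sketch (the model architecture, feature shape, and the optional tfr.keras.metrics.NDCGMetric are assumptions for this example, not part of this API), a simple univariate scoring model can be compiled with this loss:

import tensorflow as tf
import tensorflow_ranking as tfr

# Hypothetical scoring model: maps per-item features to one score per item.
inputs = tf.keras.Input(shape=(None, 16))  # [batch_size, list_size, num_features]
scores = tf.squeeze(tf.keras.layers.Dense(1)(inputs), axis=-1)  # [batch_size, list_size]
model = tf.keras.Model(inputs=inputs, outputs=scores)

model.compile(
    optimizer='sgd',
    loss=tfr.keras.losses.PairwiseHingeLoss(),
    metrics=[tfr.keras.metrics.NDCGMetric(name='ndcg')])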

Definition:

\[ \mathcal{L}(\{y\}, \{s\}) = \sum_i \sum_j I[y_i > y_j] \max(0, 1 - (s_i - s_j)) \]
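The sketch below evaluates this double sum directly for the first standalone example above. The raw sum is 1.2; the reported value of 0.6 then follows under the assumption (not stated on this page) that the default SUM_OVER_BATCH_SIZE reduction divides the summed pairwise loss by the number of scores in the batch.

import numpy as np

y = np.array([1., 0.])    # labels y_i
s = np.array([0.6, 0.8])  # scores s_i

# Double sum from the definition; only the pair (i=0, j=1) has y_i > y_j.
raw = sum(
    max(0.0, 1.0 - (s[i] - s[j]))
    for i in range(len(y))
    for j in range(len(y))
    if y[i] > y[j])
print(raw)           # ~1.2

# Assumption: dividing by the number of scores (2) reproduces the 0.6 above.
print(raw / s.size)  # ~0.6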

Args

reduction (Optional) The tf.keras.losses.Reduction to use (see tf.keras.losses.Loss).
name (Optional) The name for the op.
lambda_weight (Optional) A lambda weight to apply to the loss. Can be one of tfr.keras.losses.DCGLambdaWeight, tfr.keras.losses.NDCGLambdaWeight, or tfr.keras.losses.PrecisionLambdaWeight.
temperature (Optional) The temperature to use for scaling the logits.
ragged (Optional) If True, this loss will accept ragged tensors. If False, this loss will accept dense tensors.
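An illustrative construction that exercises the arguments listed above (the particular lambda weight, temperature, and name are arbitrary choices for this sketch, not defaults):

import tensorflow as tf
import tensorflow_ranking as tfr

loss = tfr.keras.losses.PairwiseHingeLoss(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE,
    name='pairwise_hinge_loss',
    lambda_weight=tfr.keras.losses.NDCGLambdaWeight(),
    temperature=0.5,  # used for scaling the logits, per the description above
    ragged=False)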

Methods

from_config

Instantiates a Loss from its config (output of get_config()).

Args
config Output of get_config().

Returns
A Loss instance.

get_config

Returns the config dictionary for a Loss instance.
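A sketch of the get_config/from_config round trip (the exact keys of the returned dictionary are an implementation detail and are not listed here):

import tensorflow_ranking as tfr

loss = tfr.keras.losses.PairwiseHingeLoss(temperature=2.0)
config = loss.get_config()  # serializable configuration dictionary
restored = tfr.keras.losses.PairwiseHingeLoss.from_config(config)

# The restored instance behaves like the original.
print(restored([[1., 0.]], [[0.6, 0.8]]).numpy())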

__call__

See tf.keras.losses.Loss.
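
Since __call__ follows the base tf.keras.losses.Loss signature, loss(y_true, y_pred, sample_weight=None), the loss can be invoked directly on a batch of lists; a minimal sketch (the labels and scores are arbitrary):

import tensorflow_ranking as tfr

loss = tfr.keras.losses.PairwiseHingeLoss()

y_true = [[1., 0.], [0., 1.]]
y_pred = [[0.6, 0.8], [0.5, 0.8]]

# Returns a scalar tensor, reduced according to the loss's reduction setting.
print(loss(y_true, y_pred).numpy())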