tfr.keras.losses.SoftmaxLoss
Computes Softmax cross-entropy loss between y_true and y_pred.
tfr.keras.losses.SoftmaxLoss(
reduction: tf.losses.Reduction = tf.losses.Reduction.AUTO,
name: Optional[str] = None,
lambda_weight: Optional[losses_impl._LambdaWeight] = None,
temperature: float = 1.0,
ragged: bool = False
)

Args
reduction: (Optional) The tf.keras.losses.Reduction to use (see tf.keras.losses.Loss).
name: (Optional) The name for the op.
lambda_weight: (Optional) A lambda weight to apply to the loss. Can be one of tfr.keras.losses.DCGLambdaWeight, tfr.keras.losses.NDCGLambdaWeight, or tfr.keras.losses.PrecisionLambdaWeight.
temperature: (Optional) The temperature to use for scaling the logits.
ragged: (Optional) If True, this loss will accept ragged tensors. If False, this loss will accept dense tensors.
For each list of scores s in y_pred and list of labels y in y_true:
loss = - sum_i y_i * log(softmax(s_i))
Standalone usage:
y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]
loss = tfr.keras.losses.SoftmaxLoss()
loss(y_true, y_pred).numpy()
0.7981389
# Using ragged tensors
y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
loss = tfr.keras.losses.SoftmaxLoss(ragged=True)
loss(y_true, y_pred).numpy()
0.83911896
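The first value can be verified by hand. This is a minimal NumPy sketch of the computation, not part of the tfr API:

import numpy as np

# Reproduce the first standalone example above.
y_true = np.array([1., 0.])
s = np.array([0.6, 0.8])
softmax = np.exp(s) / np.sum(np.exp(s))   # [0.450166, 0.549834]
loss = -np.sum(y_true * np.log(softmax))  # only the item with label 1 contributes
print(loss)                               # ~0.7981389, matching SoftmaxLoss

The ragged example applies the same computation per list and averages the per-list losses: (0.79814 + 0.88010) / 2 ≈ 0.83912.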
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tfr.keras.losses.SoftmaxLoss())
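For context, here is a minimal, hypothetical scoring model the compile() call could be attached to. The feature count and layer choices are illustrative assumptions, not part of the API:

import tensorflow as tf
import tensorflow_ranking as tfr

# Hypothetical list-wise scoring model: inputs have shape
# [batch_size, list_size, num_features]; output is one score per item.
inputs = tf.keras.Input(shape=(None, 8))   # 8 features per item (assumed)
scores = tf.keras.layers.Dense(1)(inputs)  # [batch, list_size, 1]
scores = tf.squeeze(scores, axis=-1)       # [batch, list_size]
model = tf.keras.Model(inputs=inputs, outputs=scores)
model.compile(optimizer='sgd', loss=tfr.keras.losses.SoftmaxLoss())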
Definition:
\[
\mathcal{L}(\{y\}, \{s\}) = - \sum_i y_i
\log\left(\frac{\exp(s_i)}{\sum_j \exp(s_j)}\right)
\]
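The temperature argument rescales the scores before the softmax. Assuming the usual convention that scores are divided by the temperature, i.e. softmax(s / temperature), the two calls below should agree; this is a sketch under that assumption, not a documented guarantee:

import tensorflow as tf
import tensorflow_ranking as tfr

y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]

# Assumed equivalence: temperature=2.0 vs. pre-dividing the logits by 2.
a = tfr.keras.losses.SoftmaxLoss(temperature=2.0)(y_true, y_pred).numpy()
b = tfr.keras.losses.SoftmaxLoss()(y_true, tf.constant(y_pred) / 2.0).numpy()
print(a, b)  # expected to match under the stated assumption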
Methods
from_config
@classmethod
from_config(
config, custom_objects=None
)
Instantiates a Loss from its config (output of get_config()).
Args
config: Output of get_config().

Returns
A Loss instance.
get_config
get_config() -> Dict[str, Any]
Returns the config dictionary for a Loss instance.
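A short round-trip sketch of get_config() and from_config(); the exact keys in the config dict are indicative only:

import tensorflow_ranking as tfr

loss = tfr.keras.losses.SoftmaxLoss(temperature=0.5)
config = loss.get_config()  # dict with entries such as 'name' and 'temperature'
restored = tfr.keras.losses.SoftmaxLoss.from_config(config)
print(restored.get_config() == config)  # True: the round trip preserves the config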
__call__
__call__(
    y_true: tfr.keras.model.TensorLike,
    y_pred: tfr.keras.model.TensorLike,
    sample_weight: Optional[utils.TensorLike] = None
) -> tf.Tensor
See _RankingLoss.
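For illustration, a hedged sketch of passing sample_weight through __call__. A weight of shape [batch_size, 1] is interpreted as one weight per list; the values here are illustrative:

import tensorflow_ranking as tfr

y_true = [[1., 0.], [0., 1.]]
y_pred = [[0.6, 0.8], [0.5, 0.8]]
loss = tfr.keras.losses.SoftmaxLoss()

# One weight per list; the second list is down-weighted.
print(loss(y_true, y_pred, sample_weight=[[1.0], [0.5]]).numpy())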