# tfr.keras.losses.SigmoidCrossEntropyLoss

[View source on GitHub](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L1385-L1432)

Computes the Sigmoid cross-entropy loss between `y_true` and `y_pred`.

    tfr.keras.losses.SigmoidCrossEntropyLoss(
        reduction: tf.losses.Reduction = tf.losses.Reduction.AUTO,
        name: Optional[str] = None,
        ragged: bool = False
    )

    loss = -(y_true log(sigmoid(y_pred)) + (1 - y_true) log(1 - sigmoid(y_pred)))

**Note:** This loss does not support graded relevance labels and should only be used with binary relevance labels (\(y \in [0, 1]\)).

#### Standalone usage:

    y_true = [[1., 0.]]
    y_pred = [[0.6, 0.8]]
    loss = tfr.keras.losses.SigmoidCrossEntropyLoss()
    loss(y_true, y_pred).numpy()
    0.8042943

    # Using ragged tensors
    y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
    y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
    loss = tfr.keras.losses.SigmoidCrossEntropyLoss(ragged=True)
    loss(y_true, y_pred).numpy()
    0.64446354

Usage with the `compile()` API:

    model.compile(optimizer='sgd',
                  loss=tfr.keras.losses.SigmoidCrossEntropyLoss())

#### Definition:

\[
\mathcal{L}(\{y\}, \{s\}) = -\sum_{i} \left[ y_i \log(\text{sigmoid}(s_i)) + (1 - y_i) \log(1 - \text{sigmoid}(s_i)) \right]
\]

| Args | |
|------|---|
| `reduction` | Type of [`tf.keras.losses.Reduction`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction) to apply to the loss. The default value is `AUTO`, which means the reduction option is determined by the usage context; in almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used under a [`tf.distribute.Strategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy), except via [`Model.compile()`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile) and [`Model.fit()`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit), using `AUTO` or `SUM_OVER_BATCH_SIZE` raises an error. See this custom training [tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training) for more details. |
| `name` | Optional name for the instance. |

Methods
-------

### `from_config`

    @classmethod
    from_config(
        config
    )

Instantiates a `Loss` from its config (output of `get_config()`).

| Args | |
|------|---|
| `config` | Output of `get_config()`. |

| Returns | |
|---------|---|
| A `Loss` instance. | |

### `get_config`

[View source](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L280-L283)

    get_config() -> Dict[str, Any]

Returns the config dictionary for a `Loss` instance.

### `__call__`

[View source](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/losses.py#L262-L270)

    __call__(
        y_true: tfr.keras.model.TensorLike,
        y_pred: tfr.keras.model.TensorLike,
        sample_weight: Optional[utils.TensorLike] = None
    ) -> tf.Tensor

See `tf.keras.losses.Loss`.

Last updated 2023-10-20 UTC.
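As a sanity check, the standalone value above can be reproduced directly from the definition with plain Python (a sketch using only the standard library, no `tensorflow_ranking` required; the reported figure is the mean of the element-wise losses, consistent with the default `SUM_OVER_BATCH_SIZE` reduction):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_cross_entropy(y: float, s: float) -> float:
    # Per-element loss from the definition above
    return -(y * math.log(sigmoid(s)) + (1.0 - y) * math.log(1.0 - sigmoid(s)))

y_true = [1.0, 0.0]
y_pred = [0.6, 0.8]

losses = [sigmoid_cross_entropy(y, s) for y, s in zip(y_true, y_pred)]
mean_loss = sum(losses) / len(losses)  # mean = SUM_OVER_BATCH_SIZE reduction
print(round(mean_loss, 4))  # 0.8043 (TF reports 0.8042943 in float32)
```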
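One practical detail worth knowing: transcribing the definition naively (sigmoid, then log) underflows for large-magnitude logits. The `tf.nn.sigmoid_cross_entropy_with_logits` documentation gives the algebraically equivalent, numerically stable form `max(s, 0) - s*y + log(1 + exp(-|s|))`; the sketch below illustrates the equivalence (helper names here are illustrative, not part of the API):

```python
import math

def naive_sce(y: float, s: float) -> float:
    # Direct transcription of the definition; fails for large |s|
    sig = 1.0 / (1.0 + math.exp(-s))
    return -(y * math.log(sig) + (1.0 - y) * math.log(1.0 - sig))

def stable_sce(y: float, s: float) -> float:
    # Equivalent stable form: max(s, 0) - s*y + log(1 + exp(-|s|))
    return max(s, 0.0) - s * y + math.log1p(math.exp(-abs(s)))

# The two forms agree on moderate logits...
for y, s in [(1.0, 0.6), (0.0, 0.8), (1.0, -2.0)]:
    assert abs(naive_sce(y, s) - stable_sce(y, s)) < 1e-9

# ...but only the stable form stays finite at extreme logits,
# where sigmoid(s) rounds to exactly 1.0 and log(1 - sigmoid(s)) blows up
print(stable_sce(0.0, 1000.0))  # 1000.0
```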