tf.keras.losses.SparseCategoricalCrossentropy
Computes the crossentropy loss between the labels and predictions.
Inherits From: Loss
```python
tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    ignore_class=None,
    reduction='sum_over_batch_size',
    name='sparse_categorical_crossentropy'
)
```
Used in the notebooks

Used in the guide:
- Effective Tensorflow 2 (https://www.tensorflow.org/guide/effective_tf2)
- Migrate early stopping (https://www.tensorflow.org/guide/migrate/early_stopping)
- Migrate the fault tolerance mechanism (https://www.tensorflow.org/guide/migrate/fault_tolerance)
- Use TPUs (https://www.tensorflow.org/guide/tpu)
- tf.data: Build TensorFlow input pipelines (https://www.tensorflow.org/guide/data)

Used in the tutorials:
- Load text (https://www.tensorflow.org/tutorials/load_data/text)
- Distributed training with Keras (https://www.tensorflow.org/tutorials/distribute/keras)
- Image segmentation (https://www.tensorflow.org/tutorials/images/segmentation)
- Save and load a model using a distribution strategy (https://www.tensorflow.org/tutorials/distribute/save_and_load)
- Image classification (https://www.tensorflow.org/tutorials/images/classification)
Use this crossentropy loss function when there are two or more label
classes. Labels are expected to be provided as integers. If you want to
provide labels using a one-hot representation, please use the
`CategoricalCrossentropy` loss instead. There should be `num_classes`
floating point values per feature for `y_pred` and a single class index
per feature for `y_true`.

In the snippet below, there is a single class index per example for
`y_true` and `num_classes` floating point values per example for
`y_pred`. The shape of `y_true` is `[batch_size]` and the shape of
`y_pred` is `[batch_size, num_classes]`.
Args:

- `from_logits`: Whether `y_pred` is expected to be a logits tensor. By
  default, we assume that `y_pred` encodes a probability distribution.
- `reduction`: Type of reduction to apply to the loss. In almost all cases
  this should be `"sum_over_batch_size"`. Supported options are `"sum"`,
  `"sum_over_batch_size"`, or `None`.
- `name`: Optional name for the loss instance.
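To illustrate `from_logits`: when it is `True`, the loss applies a softmax to `y_pred` internally, so passing raw logits should give the same result as passing their softmax as probabilities with `from_logits=False`. A hedged NumPy sketch of that equivalence (not the Keras implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def scce(y_true, y_pred, from_logits=False):
    """Sketch of sparse categorical crossentropy with mean reduction."""
    probs = softmax(np.asarray(y_pred)) if from_logits else np.asarray(y_pred)
    idx = np.arange(len(y_true))
    return -np.log(probs[idx, np.asarray(y_true)]).mean()

y_true = [1, 1]
logits = [[1.0, 3.0, 0.5], [0.2, 2.0, 0.2]]  # hypothetical unnormalized scores
loss_logits = scce(y_true, logits, from_logits=True)
loss_probs = scce(y_true, softmax(np.asarray(logits)))
print(np.isclose(loss_logits, loss_probs))  # True
```

In practice, passing logits directly with `from_logits=True` is usually preferred for numerical stability over applying a softmax activation and then the loss.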
Examples:

```python
>>> y_true = [1, 2]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> # Using the default 'sum_over_batch_size' reduction type.
>>> scce = keras.losses.SparseCategoricalCrossentropy()
>>> scce(y_true, y_pred)
1.177

>>> # Calling with 'sample_weight'.
>>> scce(y_true, y_pred, sample_weight=np.array([0.3, 0.7]))
0.814

>>> # Using 'sum' reduction type.
>>> scce = keras.losses.SparseCategoricalCrossentropy(
...     reduction="sum")
>>> scce(y_true, y_pred)
2.354

>>> # Using `None` reduction type.
>>> scce = keras.losses.SparseCategoricalCrossentropy(
...     reduction=None)
>>> scce(y_true, y_pred)
array([0.0513, 2.303], dtype=float32)
```
Usage with the `compile()` API:

```python
model.compile(optimizer='sgd',
              loss=keras.losses.SparseCategoricalCrossentropy())
```
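The reduction modes in the examples above differ only in how the per-example losses `[0.0513, 2.303]` are combined. A hedged sketch of the arithmetic (note that when `sample_weight` is given, `"sum_over_batch_size"` divides the weighted sum by the batch size, not by the sum of the weights, which is how the `0.814` value arises):

```python
import numpy as np

# Per-example losses for y_true = [1, 2], y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]:
per_example = np.array([-np.log(0.95), -np.log(0.1)])

print(round(per_example.sum(), 3))    # "sum" reduction -> 2.354
print(round(per_example.mean(), 3))   # "sum_over_batch_size" reduction -> 1.177
print(per_example)                    # reduction=None -> the per-example losses

# With sample_weight, the weighted sum is divided by the batch size.
weights = np.array([0.3, 0.7])
print(round((per_example * weights).sum() / len(per_example), 3))  # 0.814
```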
Methods

`call`

View source: https://github.com/keras-team/keras/tree/v3.3.3/keras/src/losses/losses.py#L20-L22

```python
call(
    y_true, y_pred
)
```

`from_config`

View source: https://github.com/keras-team/keras/tree/v3.3.3/keras/src/losses/losses.py#L30-L34

```python
@classmethod
from_config(
    config
)
```

`get_config`

View source: https://github.com/keras-team/keras/tree/v3.3.3/keras/src/losses/losses.py#L977-L983

```python
get_config()
```

`__call__`

View source: https://github.com/keras-team/keras/tree/v3.3.3/keras/src/losses/loss.py#L32-L61

```python
__call__(
    y_true, y_pred, sample_weight=None
)
```

Call self as a function.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-06-07 UTC.