Computes the crossentropy loss between the labels and predictions.
Inherits From: Loss
tf.keras.losses.CategoricalCrossentropy(
from_logits=False,
label_smoothing=0.0,
axis=-1,
reduction=losses_utils.ReductionV2.AUTO,
name='categorical_crossentropy'
)
Use this crossentropy loss function when there are two or more label
classes. We expect labels to be provided in a one_hot representation. If
you want to provide labels as integers, please use
SparseCategoricalCrossentropy loss. There should be # classes floating
point values per feature.
In the snippet below, there are # classes floating point values per
example. The shape of both y_pred and y_true is
[batch_size, num_classes].
Standalone usage:
y_true = [[0, 1, 0], [0, 0, 1]]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
# Using 'auto'/'sum_over_batch_size' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy()
cce(y_true, y_pred).numpy()
1.177

# Calling with 'sample_weight'.
cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814

# Using 'sum' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)
cce(y_true, y_pred).numpy()
2.354

# Using 'none' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
cce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)
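If your labels are integer class indices rather than one-hot vectors, use SparseCategoricalCrossentropy instead. A minimal sketch contrasting the two (same data as above, so both should yield ~1.177):

import tensorflow as tf

y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]

# One-hot labels -> CategoricalCrossentropy.
cce = tf.keras.losses.CategoricalCrossentropy()
cce([[0, 1, 0], [0, 0, 1]], y_pred).numpy()

# Integer labels -> SparseCategoricalCrossentropy.
scce = tf.keras.losses.SparseCategoricalCrossentropy()
scce([1, 2], y_pred).numpy()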
Usage with the compile() API:
model.compile(optimizer='sgd',
loss=tf.keras.losses.CategoricalCrossentropy())
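A fuller end-to-end sketch; the toy model and random data below are illustrative assumptions, not part of the original example:

import numpy as np
import tensorflow as tf

# Toy data: 8 examples, 4 features, 3 classes (one-hot labels).
x = np.random.rand(8, 4).astype("float32")
y = tf.one_hot(np.random.randint(0, 3, size=(8,)), depth=3)

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(3, activation="softmax")])
model.compile(optimizer="sgd",
              loss=tf.keras.losses.CategoricalCrossentropy())
model.fit(x, y, epochs=1, verbose=0)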
| Args | |
|---|---|
| from_logits | Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. See the sketch after this table. |
| label_smoothing | Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. For example, if 0.1, use 0.1 / num_classes for non-target labels and 0.9 + 0.1 / num_classes for target labels. See the sketch after this table. |
| axis | The axis along which to compute crossentropy (the features axis). Defaults to -1. |
| reduction | Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used under a tf.distribute.Strategy, except via Model.compile() and Model.fit(), using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see the custom training tutorial for more details. |
| name | Optional name for the instance. Defaults to 'categorical_crossentropy'. |
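To make the from_logits and label_smoothing arguments concrete, a minimal sketch (the logit values are illustrative assumptions):

import tensorflow as tf

y_true = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
logits = [[1.0, 3.0, 1.0], [0.5, 0.5, 2.0]]  # raw, unnormalized scores

# from_logits=True applies softmax internally, so it should match
# softmaxing first and using the default from_logits=False.
cce_logits = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce_probs = tf.keras.losses.CategoricalCrossentropy()
probs = tf.nn.softmax(logits)
cce_logits(y_true, logits).numpy()  # same value as the line below
cce_probs(y_true, probs).numpy()

# label_smoothing=0.1 effectively replaces the targets with
# y_true * (1 - 0.1) + 0.1 / num_classes before computing the loss.
cce_smooth = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
cce_smooth(y_true, probs).numpy()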
Methods
from_config
@classmethod
from_config(
    config
)
Instantiates a Loss from its config (output of get_config()).
| Args | |
|---|---|
| config | Output of get_config(). |

| Returns | |
|---|---|
| A keras.losses.Loss instance. |
get_config
get_config()
Returns the config dictionary for a Loss instance.
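A round-trip sketch using these two methods; assuming get_config() returns a plain dict of the constructor arguments:

import tensorflow as tf

cce = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
# config is a plain dict including e.g. 'from_logits',
# 'label_smoothing', 'reduction', and 'name'.
config = cce.get_config()

# from_config() reconstructs an equivalent Loss instance.
restored = tf.keras.losses.CategoricalCrossentropy.from_config(config)
restored.get_config() == config  # expected: True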
__call__
__call__(
y_true, y_pred, sample_weight=None
)
Invokes the Loss instance.
| Args | |
|---|---|
| y_true | Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. |
| y_pred | The predicted values. shape = [batch_size, d0, .. dN]. |
| sample_weight | Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) See the sketch after this table. |

| Returns | |
|---|---|
| Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) |

| Raises | |
|---|---|
| ValueError | If the shape of sample_weight is invalid. |
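The two sample_weight forms described above can be exercised directly; a minimal sketch (values reused from the standalone usage above):

import tensorflow as tf

y_true = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]
cce = tf.keras.losses.CategoricalCrossentropy()

# Scalar sample_weight: the reduced loss is simply scaled.
cce(y_true, y_pred, sample_weight=2.0).numpy()

# sample_weight of shape [batch_size]: each example's loss is rescaled
# by the matching weight before the reduction is applied.
cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()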