Computes the cross-entropy loss between true labels and predicted labels.
Inherits From: Loss
tf.keras.losses.BinaryCrossentropy(
from_logits=False, label_smoothing=0, reduction=losses_utils.ReductionV2.AUTO,
name='binary_crossentropy'
)
Use this cross-entropy loss when there are only two label classes (assumed to be 0 and 1). For each example, there should be a single floating-point value per prediction.
In the snippet below, each of the four predictions is a single
floating-point value, and both y_pred and y_true have the shape
[2, 2].
Standalone usage:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
# Using 'auto'/'sum_over_batch_size' reduction type.
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy()
0.815

# Calling with 'sample_weight'.
bce(y_true, y_pred, sample_weight=[1, 0]).numpy()
0.458

# Using 'sum' reduction type.
bce = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)
bce(y_true, y_pred).numpy()
1.630

# Using 'none' reduction type.
bce = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
bce(y_true, y_pred).numpy()
array([0.916 , 0.714], dtype=float32)
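As a cross-check, the values above can be reproduced from the underlying formula, loss = -mean(y*log(p) + (1-y)*log(1-p)), reduced over the last axis first. The sketch below uses plain NumPy and ignores the small epsilon clipping that the TensorFlow implementation applies internally, so it matches only to a few decimal places.

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])

# Per-element binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)].
elementwise = -(y_true * np.log(y_pred) + (1. - y_true) * np.log(1. - y_pred))

# Reduce over the last axis first to get one loss value per sample.
per_sample = elementwise.mean(axis=-1)
print(per_sample)                     # [0.916 0.714] -> 'none' reduction
print(per_sample.mean())              # 0.815 -> 'auto'/'sum_over_batch_size'
print(per_sample.sum())               # 1.630 -> 'sum'
print((per_sample * [1, 0]).mean())   # 0.458 -> sample_weight=[1, 0]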
Usage with the tf.keras API:
model.compile(optimizer='sgd', loss=tf.keras.losses.BinaryCrossentropy())
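For instance, a minimal end-to-end sketch (the two-layer model and the random data here are illustrative, not part of the API):

import numpy as np
import tensorflow as tf

# A tiny binary classifier: any model ending in a single sigmoid unit works.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='sgd', loss=tf.keras.losses.BinaryCrossentropy())

# Dummy data: 32 examples with 8 features each and 0/1 labels.
x = np.random.random((32, 8)).astype('float32')
y = np.random.randint(0, 2, size=(32, 1)).astype('float32')
model.fit(x, y, epochs=1)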
| Args | |
|---|---|
| from_logits | Whether to interpret y_pred as a tensor of logit values. By default, we assume that y_pred contains probabilities (i.e., values in [0, 1]). Note: using from_logits=True may be more numerically stable. |
| label_smoothing | Float in [0, 1]. When 0, no smoothing occurs. When > 0, we compute the loss between the predicted labels and a smoothed version of the true labels, where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to heavier smoothing. |
| reduction | (Optional) Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see the custom training tutorial for more details. |
| name | (Optional) Name for the op. Defaults to 'binary_crossentropy'. |
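As a rough illustration of the first two arguments (the numbers are arbitrary): passing raw logits with from_logits=True is equivalent to applying a sigmoid first and passing probabilities, and label_smoothing softens the hard 0/1 targets before the loss is computed.

import tensorflow as tf

y_true = [[0.], [1.]]
logits = [[2.0], [-1.0]]

# Raw logits with from_logits=True ...
bce_from_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
# ... match sigmoid(logits) with the default from_logits=False,
# but the logit path avoids an extra round-trip through probabilities.
bce_from_probs = tf.keras.losses.BinaryCrossentropy()
probs = tf.sigmoid(logits)
print(bce_from_logits(y_true, logits).numpy())  # ~1.72
print(bce_from_probs(y_true, probs).numpy())    # same value, up to precision

# With label_smoothing=0.1 the targets become 0 -> 0.05 and 1 -> 0.95
# before the cross-entropy is evaluated.
bce_smooth = tf.keras.losses.BinaryCrossentropy(from_logits=True,
                                                label_smoothing=0.1)
print(bce_smooth(y_true, logits).numpy())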
Methods
from_config
@classmethod
from_config(
    config
)
Instantiates a Loss from its config (output of get_config()).
| Args | |
|---|---|
| config | Output of get_config(). |

| Returns | |
|---|---|
| A Loss instance. | |
get_config
get_config()
Returns the config dictionary for a Loss instance.
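For example, the two methods can be used together to serialize and rebuild a configured loss (a small sketch):

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True, label_smoothing=0.1)
config = bce.get_config()
# config is a plain dict holding the constructor arguments,
# e.g. 'from_logits', 'label_smoothing', 'reduction' and 'name'.
restored = tf.keras.losses.BinaryCrossentropy.from_config(config)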
__call__
__call__(
y_true, y_pred, sample_weight=None
)
Invokes the Loss instance.
| Args | |
|---|---|
| y_true | Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. |
| y_pred | The predicted values. shape = [batch_size, d0, .. dN]. |
| sample_weight | Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) |

| Returns | |
|---|---|
| Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) | |

| Raises | |
|---|---|
| ValueError | If the shape of sample_weight is invalid. |
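To make the sample_weight semantics above concrete, a short sketch reusing the values from the standalone example:

import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

bce = tf.keras.losses.BinaryCrossentropy()
# A scalar sample_weight simply scales the reduced loss: 0.815 * 2.0 = 1.63.
print(bce(y_true, y_pred, sample_weight=2.0).numpy())
# A [batch_size] sample_weight rescales each sample's loss before reduction:
# (0.916 * 1 + 0.714 * 0) / 2 = 0.458.
print(bce(y_true, y_pred, sample_weight=[1, 0]).numpy())

# With reduction NONE the per-sample losses are returned unreduced,
# with shape [batch_size].
bce_none = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
print(bce_none(y_true, y_pred).numpy())  # [0.916 0.714]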