Computes the alpha-balanced focal crossentropy loss.
Inherits From: Loss
tf.keras.losses.CategoricalFocalCrossentropy(
alpha=0.25,
gamma=2.0,
from_logits=False,
label_smoothing=0.0,
axis=-1,
reduction=losses_utils.ReductionV2.AUTO,
name='categorical_focal_crossentropy'
)
Use this crossentropy loss function when there are two or more label
classes and you want to handle class imbalance without using
class_weights. We expect labels to be provided in a one-hot
representation.
According to Lin et al., 2018, it helps to apply a focal factor to down-weight easy examples and focus more on hard examples. The general formula for the focal loss (FL) is as follows:
FL(p_t) = −(1 − p_t)^gamma * log(p_t)
where p_t is defined as follows:
p_t = output if y_true == 1, else 1 - output
(1 − p_t)^gamma is the modulating_factor, where gamma is a focusing
parameter. When gamma = 0, there is no focal effect on the cross entropy.
gamma reduces the importance given to simple examples in a smooth manner.
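To see the effect of gamma concretely, here is a minimal sketch (the
p_t values are hypothetical, chosen only to contrast an easy example
with a hard one):
# Modulating factor (1 − p_t)^gamma for an easy (p_t = 0.9) and a
# hard (p_t = 0.1) example.
for p_t in (0.9, 0.1):
    for gamma in (0.0, 2.0):
        print(p_t, gamma, (1 - p_t) ** gamma)
# gamma = 0 gives a factor of 1 for both (plain cross entropy);
# gamma = 2 scales the easy example by 0.01 and the hard one by 0.81.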
The authors use the alpha-balanced variant of focal loss (FL) in the paper:
FL(p_t) = −alpha * (1 − p_t)^gamma * log(p_t)
where alpha is the weight factor for the classes. If alpha = 1, the
loss won't be able to handle class imbalance properly as all
classes will have the same weight. This can be a constant or a list of
constants. If alpha is a list, it must have the same length as the number
of classes.
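For example, alpha can be passed either as a scalar or per class (the
weights below are hypothetical, for a 3-class problem):
cce = tf.keras.losses.CategoricalFocalCrossentropy(alpha=0.25)             # scalar
cce = tf.keras.losses.CategoricalFocalCrossentropy(alpha=[0.2, 0.3, 0.5])  # one weight per class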
The formula above can be generalized to:
FL(p_t) = alpha * (1 − p_t)^gamma * CrossEntropy(y_true, y_pred)
where the minus sign comes from CrossEntropy(y_true, y_pred) (CE).
Extending this to multi-class case is straightforward:
FL(p_t) = alpha * (1 − p_t)^gamma * CategoricalCE(y_true, y_pred)
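This per-class computation can be sketched in plain NumPy (a minimal
sketch, assuming the element-wise form above and epsilon clipping of
y_pred to avoid log(0); it reproduces the standalone-usage numbers
below):
import numpy as np

alpha, gamma, eps = 0.25, 2.0, 1e-7
y_true = np.array([[0., 1., 0.], [0., 0., 1.]])
y_pred = np.clip(np.array([[0.05, 0.95, 0.], [0.1, 0.8, 0.1]]), eps, 1 - eps)

ce = -y_true * np.log(y_pred)               # per-class cross entropy
focal = alpha * (1 - y_pred) ** gamma * ce  # apply the focal factor
print(focal.sum(axis=-1))                   # [3.2058e-05 4.6627e-01]
print(focal.sum(axis=-1).mean())            # ~0.23315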
In the snippet below, there are # classes floating-point values per
example. The shape of both y_pred and y_true is
[batch_size, num_classes].
Standalone usage:
y_true = [[0., 1., 0.], [0., 0., 1.]]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
# Using 'auto'/'sum_over_batch_size' reduction type.
cce = tf.keras.losses.CategoricalFocalCrossentropy()
cce(y_true, y_pred).numpy()
0.23315276

# Calling with 'sample_weight'.
cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.1632

# Using 'sum' reduction type.
cce = tf.keras.losses.CategoricalFocalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)
cce(y_true, y_pred).numpy()
0.46631

# Using 'none' reduction type.
cce = tf.keras.losses.CategoricalFocalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
cce(y_true, y_pred).numpy()
array([3.2058331e-05, 4.6627346e-01], dtype=float32)
Usage with the compile() API:
model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalFocalCrossentropy())
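A fuller end-to-end sketch (the architecture and the random data below
are hypothetical, purely for illustration):
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalFocalCrossentropy(),
              metrics=['accuracy'])

x = np.random.rand(32, 4).astype('float32')          # dummy features
y = tf.keras.utils.to_categorical(
    np.random.randint(0, 3, 32), num_classes=3)      # one-hot labels
model.fit(x, y, epochs=1, verbose=0)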
| Args | |
|---|---|
| alpha | A weight balancing factor for all classes, default is 0.25 as mentioned in the reference. It can be a list of floats or a scalar. In the multi-class case, alpha may be set by inverse class frequency by using compute_class_weight from sklearn.utils (see the sketch after this table). |
| gamma | A focusing parameter, default is 2.0 as mentioned in the reference. It helps to gradually reduce the importance given to simple (easy) examples in a smooth manner. |
| from_logits | Whether output is expected to be a logits tensor. By default, we consider that output encodes a probability distribution. |
| label_smoothing | Float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. For example, if 0.1, use 0.1 / num_classes for non-target labels and 0.9 + 0.1 / num_classes for target labels. |
| axis | The axis along which to compute crossentropy (the features axis). Defaults to -1. |
| reduction | Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used under a tf.distribute.Strategy, except via Model.compile() and Model.fit(), using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. |
| name | Optional name for the instance. Defaults to 'categorical_focal_crossentropy'. |
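As mentioned for alpha above, per-class weights can be derived from
inverse class frequency (a minimal sketch; the label array and the
normalization step are assumptions, not prescribed by the API):
import numpy as np
from sklearn.utils import compute_class_weight

labels = np.array([0, 0, 0, 0, 1, 1, 2])    # hypothetical integer labels
weights = compute_class_weight('balanced', classes=np.array([0, 1, 2]), y=labels)
alpha = (weights / weights.sum()).tolist()  # normalize to sum to 1 (a design choice)
cce = tf.keras.losses.CategoricalFocalCrossentropy(alpha=alpha)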
Methods
from_config
@classmethod
from_config(
    config
)
Instantiates a Loss from its config (output of get_config()).
| Args | |
|---|---|
| config | Output of get_config(). |

| Returns | |
|---|---|
| A keras.losses.Loss instance. | |
get_config
get_config()
Returns the config dictionary for a Loss instance.
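For example, a config round trip (a minimal sketch):
cce = tf.keras.losses.CategoricalFocalCrossentropy(gamma=3.0)
config = cce.get_config()  # includes alpha, gamma, from_logits, ...
restored = tf.keras.losses.CategoricalFocalCrossentropy.from_config(config)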
__call__
__call__(
y_true, y_pred, sample_weight=None
)
Invokes the Loss instance.
| Args | |
|---|---|
| y_true | Ground truth values. shape = [batch_size, d0, .. dN], except for sparse loss functions such as sparse categorical crossentropy, where shape = [batch_size, d0, .. dN-1]. |
| y_pred | The predicted values. shape = [batch_size, d0, .. dN]. |
| sample_weight | Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) |

| Returns | |
|---|---|
| Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) | |

| Raises | |
|---|---|
| ValueError | If the shape of sample_weight is invalid. |
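To illustrate the sample_weight shapes described above (a brief sketch
reusing the standalone-usage tensors):
cce = tf.keras.losses.CategoricalFocalCrossentropy()
y_true = [[0., 1., 0.], [0., 0., 1.]]
y_pred = [[0.05, 0.95, 0.], [0.1, 0.8, 0.1]]
cce(y_true, y_pred, sample_weight=2.0)                      # scalar: loss scaled by 2
cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7]))  # one weight per sample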