tf.keras.losses.LogCosh

Class LogCosh

Computes the logarithm of the hyperbolic cosine of the prediction error.

Aliases:

  • Class tf.compat.v1.keras.losses.LogCosh
  • Class tf.compat.v2.keras.losses.LogCosh
  • Class tf.compat.v2.losses.LogCosh

logcosh = log((exp(x) + exp(-x))/2), where x is the error (y_pred - y_true)

Usage:

import tensorflow as tf

l = tf.keras.losses.LogCosh()
loss = l([0., 1., 1.], [1., 0., 1.])
print('Loss: ', loss.numpy())  # Loss: 0.289
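
The printed value can be checked directly against the formula above. A minimal sketch, not part of the original docs, that computes log(cosh(x)) by hand on the same tensors and compares it with the loss class:

import tensorflow as tf

y_true = tf.constant([0., 1., 1.])
y_pred = tf.constant([1., 0., 1.])

# x is the prediction error used in the formula above.
x = y_pred - y_true
manual = tf.reduce_mean(tf.math.log((tf.exp(x) + tf.exp(-x)) / 2.))

loss_fn = tf.keras.losses.LogCosh()
print(manual.numpy(), loss_fn(y_true, y_pred).numpy())  # both ~0.289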

Usage with tf.keras API:

model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss=tf.keras.losses.LogCosh())
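
The snippet above assumes inputs and outputs are already defined. A minimal self-contained sketch; the layer sizes and the random training data below are placeholders, not part of the original example:

import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))        # placeholder feature size
outputs = tf.keras.layers.Dense(1)(inputs)  # single regression output
model = tf.keras.Model(inputs, outputs)

model.compile('sgd', loss=tf.keras.losses.LogCosh())
model.fit(np.random.rand(32, 10), np.random.rand(32, 1), epochs=1, verbose=0)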

__init__

__init__(
    reduction=losses_utils.ReductionV2.AUTO,
    name='logcosh'
)
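
The reduction argument accepts the values in tf.keras.losses.Reduction. A hedged sketch (the example tensors are made up) showing how NONE and SUM change what calling the loss returns:

import tensorflow as tf

y_true = [[0., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.]]

per_sample = tf.keras.losses.LogCosh(
    reduction=tf.keras.losses.Reduction.NONE)  # one value per sample
summed = tf.keras.losses.LogCosh(
    reduction=tf.keras.losses.Reduction.SUM)   # summed over the batch

print(per_sample(y_true, y_pred).numpy())      # shape (2,)
print(summed(y_true, y_pred).numpy())          # scalar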

Methods

__call__

__call__(
    y_true,
    y_pred,
    sample_weight=None
)

Invokes the Loss instance.

Args:

  • y_true: Ground truth values.
  • y_pred: The predicted values.
  • sample_weight: Optional Tensor whose rank is either 0, or the same rank as y_true, or is broadcastable to y_true. sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight matches the shape of y_pred, then the loss of each measurable element of y_pred is scaled by the corresponding value of sample_weight.

Returns:

Weighted loss float Tensor. If reduction is NONE, this has the same shape as y_true; otherwise, it is scalar.

Raises:

  • ValueError: If the shape of sample_weight is invalid.
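
A small sketch of the sample_weight behavior described above; the tensors and the [1., 0.] weight vector are illustrative only:

import tensorflow as tf

loss_fn = tf.keras.losses.LogCosh()

y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 0.], [1., 1.]]

unweighted = loss_fn(y_true, y_pred)
# A [batch_size] weight vector rescales each sample's loss; here the
# second sample's contribution is zeroed out before reduction.
weighted = loss_fn(y_true, y_pred, sample_weight=[1., 0.])

print(unweighted.numpy(), weighted.numpy())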

from_config

from_config(
    cls,
    config
)

Instantiates a Loss from its config (output of get_config()).

Args:

  • config: Output of get_config().

Returns:

A Loss instance.

get_config

get_config()
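
Returns the config dictionary for a Loss instance.

A minimal sketch of the round trip between get_config and from_config (from_config is a classmethod; the config contents noted in the comment are indicative):

import tensorflow as tf

loss_fn = tf.keras.losses.LogCosh(name='logcosh')
config = loss_fn.get_config()   # dict holding the 'reduction' and 'name' settings
restored = tf.keras.losses.LogCosh.from_config(config)

print(type(restored).__name__)  # LogCosh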