tf.keras.losses.logcosh
Logarithm of the hyperbolic cosine of the prediction error.
tf.keras.losses.logcosh(
    y_true, y_pred
)
`log(cosh(x))` is approximately equal to `(x ** 2) / 2` for small `x` and to `abs(x) - log(2)` for large `x`. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
Arguments:
    y_true: tensor of true targets.
    y_pred: tensor of predicted targets.

Returns:
    Tensor with one scalar loss entry per sample.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.