tf.keras.losses.MSLE
Computes the mean squared logarithmic error between y_true and y_pred.
View aliases

Main aliases

tf.keras.losses.mean_squared_logarithmic_error, tf.keras.losses.msle, tf.keras.metrics.MSLE, tf.keras.metrics.mean_squared_logarithmic_error, tf.keras.metrics.msle, tf.losses.MSLE, tf.losses.mean_squared_logarithmic_error, tf.losses.msle, tf.metrics.MSLE, tf.metrics.mean_squared_logarithmic_error, tf.metrics.msle
Compat aliases for migration

See the Migration guide for more details.

tf.compat.v1.keras.losses.MSLE, tf.compat.v1.keras.losses.mean_squared_logarithmic_error, tf.compat.v1.keras.losses.msle, tf.compat.v1.keras.metrics.MSLE, tf.compat.v1.keras.metrics.mean_squared_logarithmic_error, tf.compat.v1.keras.metrics.msle
tf.keras.losses.MSLE(
y_true, y_pred
)
loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)
Standalone usage:

y_true = np.random.randint(0, 2, size=(2, 3))
y_pred = np.random.random(size=(2, 3))
loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
# The mean is taken over the last axis, so one loss value per sample.
assert loss.shape == (2,)
# The implementation clips both inputs to the backend epsilon (1e-7)
# before taking the log, so the reference computation does the same.
y_true = np.maximum(y_true, 1e-7)
y_pred = np.maximum(y_pred, 1e-7)
assert np.allclose(
    loss.numpy(),
    np.mean(
        np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))
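Beyond standalone use, the function can also serve as a training loss: pass it to Model.compile either directly or via the string shorthand 'msle'. A minimal sketch (the tiny model and random data here are illustrative only):

```python
import numpy as np
import tensorflow as tf

# A minimal single-output regression model, purely for illustration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Select MSLE as the training loss; loss='msle' is an equivalent shorthand.
model.compile(optimizer='sgd', loss=tf.keras.losses.MSLE)

x = np.random.random((8, 4)).astype('float32')
y = np.random.random((8, 1)).astype('float32')
history = model.fit(x, y, epochs=1, verbose=0)
```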
Args

y_true    Ground truth values. shape = [batch_size, d0, .. dN].
y_pred    The predicted values. shape = [batch_size, d0, .. dN].
Returns

Mean squared logarithmic error values. shape = [batch_size, d0, .. dN-1].
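Because the error is taken in log space, MSLE penalizes relative rather than absolute differences, and it penalizes underestimates more heavily than overestimates of the same magnitude. A small NumPy sketch of the documented formula (the helper name `msle` and the example values are illustrative, not part of the API):

```python
import numpy as np

def msle(y_true, y_pred):
    # Direct NumPy transcription of the documented formula:
    # loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.square(np.log1p(y_true) - np.log1p(y_pred)), axis=-1)

# Undershooting the target by 50 costs more than overshooting it by 50:
under = msle([100.0], [50.0])    # squared(log(101) - log(51))
over = msle([100.0], [150.0])    # squared(log(101) - log(151))
assert under > over
```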
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.