# tf.losses.mean_squared_error
Adds a Sum-of-Squares loss to the training procedure.
```python
tf.losses.mean_squared_error(
    labels, predictions, weights=1.0, scope=None,
    loss_collection=tf.GraphKeys.LOSSES, reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
```
`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
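The weighting rules above can be sketched in plain Python (not TensorFlow; `weighted_mse` is a hypothetical helper written for illustration). It assumes the default `SUM_BY_NONZERO_WEIGHTS` reduction: the sum of weighted squared errors divided by the number of nonzero weight entries after broadcasting.

```python
# Illustrative sketch of the documented weighting semantics, assuming the
# default SUM_BY_NONZERO_WEIGHTS reduction. Not the TensorFlow implementation.

def weighted_mse(labels, predictions, weights=1.0):
    """labels/predictions: lists of per-sample lists; weights: scalar or [batch_size] list."""
    total, nonzero = 0.0, 0
    for i, (lab_row, pred_row) in enumerate(zip(labels, predictions)):
        # A scalar weight applies to every element; a [batch_size] weight
        # vector rescales each sample's loss by its own entry.
        w = weights if isinstance(weights, (int, float)) else weights[i]
        for l, p in zip(lab_row, pred_row):
            total += w * (l - p) ** 2
            nonzero += 1 if w != 0 else 0
    # Divide by the count of nonzero-weighted elements, not the total count.
    return total / nonzero if nonzero else 0.0

# Scalar weight: every squared error is scaled by 2.0.
loss_scalar = weighted_mse([[1.0, 2.0]], [[0.0, 2.0]], weights=2.0)

# Per-sample weights: the second sample is zeroed out and also excluded
# from the averaging denominator.
loss_masked = weighted_mse([[1.0], [2.0]], [[0.0], [0.0]], weights=[1.0, 0.0])
```

Note that a zero weight removes a sample both from the numerator and the denominator, which is what distinguishes `SUM_BY_NONZERO_WEIGHTS` from a plain mean.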
| Args | |
|---|---|
| `labels` | The ground truth output tensor, same dimensions as `predictions`. |
| `predictions` | The predicted outputs. |
| `weights` | Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension). |
| `scope` | The scope for the operations performed in computing the loss. |
| `loss_collection` | Collection to which the loss will be added. |
| `reduction` | Type of reduction to apply to loss. |
| Returns |
|---|
| Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar. |
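The return-shape contract can be illustrated with a small plain-Python sketch (hypothetical helpers, not the TensorFlow implementation): a `NONE`-style reduction keeps one weighted squared error per element of `labels`, while any other reduction collapses the result to a single scalar.

```python
# Illustrative sketch of the return-shape contract. `mse_elementwise`
# mirrors reduction=NONE; `mse_scalar` mirrors a collapsing reduction
# (here: sum divided by the count of elements, for a nonzero scalar weight).

def mse_elementwise(labels, predictions, weights=1.0):
    # reduction=NONE analogue: same nested shape as `labels`.
    return [[weights * (l - p) ** 2 for l, p in zip(lr, pr)]
            for lr, pr in zip(labels, predictions)]

def mse_scalar(labels, predictions, weights=1.0):
    # Collapsing-reduction analogue: a single scalar.
    elems = mse_elementwise(labels, predictions, weights)
    flat = [v for row in elems for v in row]
    n = len(flat) if weights != 0 else 0
    return sum(flat) / n if n else 0.0
```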
| Raises | |
|---|---|
| `ValueError` | If the shape of `predictions` doesn't match that of `labels`, or if the shape of `weights` is invalid. Also if `labels` or `predictions` is `None`. |
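A minimal sketch of the error conditions listed above, using a hypothetical `validate_mse_inputs` helper written for illustration (the real function performs these checks on tensor shapes, not Python lists):

```python
# Hypothetical validation sketch mirroring the documented ValueError cases:
# None inputs, mismatched label/prediction shapes, or an invalid weights shape.

def validate_mse_inputs(labels, predictions, weights=1.0):
    if labels is None or predictions is None:
        raise ValueError("labels and predictions must not be None.")
    if len(labels) != len(predictions):
        raise ValueError("Shape of predictions must match shape of labels.")
    # A scalar (rank-0) weight is always fine; otherwise the weights must
    # broadcast against labels (length 1 or batch_size in this sketch).
    if not isinstance(weights, (int, float)) and len(weights) not in (1, len(labels)):
        raise ValueError("Invalid weights shape.")
```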
#### Eager Compatibility

The `loss_collection` argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a `tf.keras.Model`.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.