# tf.losses.compute_weighted_loss
Computes the weighted loss.
    tf.losses.compute_weighted_loss(
        losses, weights=1.0, scope=None, loss_collection=tf.GraphKeys.LOSSES,
        reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
    )
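A minimal usage sketch (assuming TensorFlow 1.x graph mode; the tensor values are illustrative):

    import tensorflow as tf

    # Per-example losses for a batch of four.
    losses = tf.constant([0.5, 1.0, 0.0, 2.0])
    # Per-example weights; the third example is masked out with weight 0.
    weights = tf.constant([1.0, 1.0, 0.0, 0.5])

    loss = tf.losses.compute_weighted_loss(losses, weights=weights)

    with tf.Session() as sess:
        # Default SUM_BY_NONZERO_WEIGHTS reduction:
        # (0.5*1.0 + 1.0*1.0 + 2.0*0.5) / 3 nonzero weights = 2.5 / 3
        print(sess.run(loss))  # ~0.8333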
#### Args

- `losses`: `Tensor` of shape `[batch_size, d1, ... dN]`.
- `weights`: Optional `Tensor` whose rank is either 0, or the same rank as `losses`, and must be broadcastable to `losses` (i.e., all dimensions must be either `1` or the same as the corresponding `losses` dimension).
- `scope`: The scope for the operations performed in computing the loss.
- `loss_collection`: Collection to which the loss will be added.
- `reduction`: Type of reduction to apply to the loss.
#### Returns

Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.
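A short sketch of how `reduction` changes the returned value (assuming TensorFlow 1.x; the values are illustrative):

    losses = tf.constant([[0.5, 1.0], [2.0, 4.0]])

    elementwise = tf.losses.compute_weighted_loss(
        losses, reduction=tf.losses.Reduction.NONE)
    scalar = tf.losses.compute_weighted_loss(losses)  # default reduction

    with tf.Session() as sess:
        print(sess.run(elementwise))  # shape (2, 2), same as `losses`
        print(sess.run(scalar))       # (0.5 + 1.0 + 2.0 + 4.0) / 4 = 1.875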
#### Raises

- `ValueError`: If `weights` is `None`, if its shape is not compatible with `losses`, or if the number of dimensions (rank) of either `losses` or `weights` is missing.
#### Note:

When calculating the gradient of a weighted loss, contributions from both `losses` and `weights` are considered. If your `weights` depend on some model parameters but you do not want this to affect the loss gradient, apply `tf.stop_gradient` to `weights` before passing them to `compute_weighted_loss`.
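A sketch of detaching the weights from the gradient (assuming TensorFlow 1.x; `params` and the expressions built from it are hypothetical):

    params = tf.Variable([1.0, 2.0])
    losses = tf.square(params)       # depends on model parameters
    weights = tf.nn.softmax(params)  # also depends on the same parameters

    # Without tf.stop_gradient, d(loss)/d(params) would include a term
    # contributed by the weights; stopping the gradient treats them as
    # constants during backpropagation.
    loss = tf.losses.compute_weighted_loss(
        losses, weights=tf.stop_gradient(weights))
    grads = tf.gradients(loss, [params])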
#### Eager Compatibility

The `loss_collection` argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a `tf.keras.Model`.
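For example, under eager execution you would keep the returned tensor yourself (a sketch assuming TensorFlow 1.x with eager execution enabled):

    import tensorflow as tf
    tf.enable_eager_execution()  # must be called once, at program startup

    losses = tf.constant([0.5, 1.0])
    # `loss_collection` is ignored here; hold on to the result instead.
    loss = tf.losses.compute_weighted_loss(losses)
    print(float(loss))  # (0.5 + 1.0) / 2 = 0.75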