`weights` acts as a coefficient for the loss. If a scalar is provided,
then the loss is simply scaled by the given value. If `weights` is a
tensor of size `[batch_size]`, then the loss weights apply to each
corresponding sample.

If `label_smoothing` is nonzero, smooth the labels towards `1/num_classes`:

    new_onehot_labels = onehot_labels * (1 - label_smoothing)
                        + label_smoothing / num_classes
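The smoothing formula above is easy to check numerically. A minimal NumPy sketch (the batch, class count, and smoothing value here are illustrative, not from the original doc):

```python
import numpy as np

# Apply the label-smoothing formula from above to a small batch of
# one-hot labels. With label_smoothing = 0.1 and 4 classes, the "hot"
# entry becomes 1 * 0.9 + 0.1/4 = 0.925 and every other entry 0.025.
num_classes = 4
label_smoothing = 0.1
onehot_labels = np.eye(num_classes)[[0, 2]]  # two samples: classes 0 and 2

new_onehot_labels = (onehot_labels * (1 - label_smoothing)
                     + label_smoothing / num_classes)
```

Note that each smoothed row still sums to 1, so the result remains a valid probability distribution over classes.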
#### Args

| Argument | Description |
|---|---|
| `logits` | `[batch_size, num_classes]` logits outputs of the network. |
| `onehot_labels` | `[batch_size, num_classes]` one-hot-encoded labels. |
| `weights` | Coefficients for the loss. Must be a scalar or a tensor of shape `[batch_size]`. |
| `label_smoothing` | If greater than 0, smooth the labels. |
| `scope` | The scope for the operations performed in computing the loss. |
#### Returns

A scalar `Tensor` representing the mean loss value.
#### Raises

| Exception | Condition |
|---|---|
| `ValueError` | If the shape of `logits` doesn't match that of `onehot_labels`, if the shape of `weights` is invalid, or if `weights` is `None`. |
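The behavior described above can be sketched in plain NumPy. This is an illustrative reference, not the TF implementation: the function name `softmax_cross_entropy_ref` is invented here, and the real op delegates weighting and normalization to an internal weighted-loss helper, whereas this sketch takes a plain weighted mean (the two agree for the default `weights=1.0`).

```python
import numpy as np

def softmax_cross_entropy_ref(logits, onehot_labels, weights=1.0,
                              label_smoothing=0.0):
    """Illustrative NumPy sketch of softmax cross-entropy with
    optional label smoothing and per-sample weights."""
    logits = np.asarray(logits, dtype=np.float64)
    onehot_labels = np.asarray(onehot_labels, dtype=np.float64)
    if weights is None:
        raise ValueError("weights must not be None")
    if logits.shape != onehot_labels.shape:
        raise ValueError("shape of logits must match that of onehot_labels")

    num_classes = onehot_labels.shape[1]
    if label_smoothing > 0:
        # Smooth the labels towards 1/num_classes, as in the formula above.
        onehot_labels = (onehot_labels * (1 - label_smoothing)
                         + label_smoothing / num_classes)

    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))

    # Per-sample cross-entropy, scaled by `weights` (scalar or [batch_size]).
    per_sample = -(onehot_labels * log_softmax).sum(axis=1)
    return float(np.mean(per_sample * weights))

logits = [[2.0, 0.5, -1.0], [0.0, 3.0, 0.0]]
labels = [[1, 0, 0], [0, 1, 0]]
loss = softmax_cross_entropy_ref(logits, labels)
```

With confident logits pointing at the correct classes, the loss is a small positive number; a scalar `weights` simply rescales it.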
---

`tf.contrib.losses.softmax_cross_entropy` creates a cross-entropy loss using `tf.nn.softmax_cross_entropy_with_logits`. (deprecated) [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/losses/python/losses/loss_ops.py#L325-L374)

    tf.contrib.losses.softmax_cross_entropy(
        logits, onehot_labels, weights=1.0, label_smoothing=0, scope=None
    )

**Warning:** THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30. Instructions for updating: use `tf.losses.softmax_cross_entropy` instead. Note that the order of the logits and labels arguments has been changed.

Last updated 2020-10-01 UTC.