# tf.nn.sampled_softmax_loss

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.13.1/tensorflow/python/ops/nn_impl.py#L2228-L2317)

Computes and returns the sampled softmax training loss.

    tf.nn.sampled_softmax_loss(
        weights,
        biases,
        labels,
        inputs,
        num_sampled,
        num_classes,
        num_true=1,
        sampled_values=None,
        remove_accidental_hits=True,
        seed=None,
        name='sampled_softmax_loss'
    )

This is a faster way to train a softmax classifier over a huge number of
classes.

This operation is for training only. It is generally an underestimate of the
full softmax loss.

A common use case is to use this method for training and to calculate the full
softmax loss for evaluation or inference, as in the following example:

    if mode == "train":
      loss = tf.nn.sampled_softmax_loss(
          weights=weights,
          biases=biases,
          labels=labels,
          inputs=inputs,
          ...)
    elif mode == "eval":
      logits = tf.matmul(inputs, tf.transpose(weights))
      logits = tf.nn.bias_add(logits, biases)
      labels_one_hot = tf.one_hot(labels, n_classes)
      loss = tf.nn.softmax_cross_entropy_with_logits(
          labels=labels_one_hot,
          logits=logits)

See the [Candidate Sampling Algorithms Reference](https://www.tensorflow.org/extras/candidate_sampling.pdf).

Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007)
([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.

**Note:** when doing the embedding lookup on `weights` and `biases`, the "div"
partition strategy is used. Support for other partition strategies will be
added later.

## Args

| Arg | Description |
|--------------------------|-----------------------------------------------|
| `weights` | A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor` objects whose concatenation along dimension 0 has shape `[num_classes, dim]`. The (possibly-sharded) class embeddings. |
| `biases` | A `Tensor` of shape `[num_classes]`. The class biases. |
| `labels` | A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of [`nn.softmax_cross_entropy_with_logits`](../../tf/nn/softmax_cross_entropy_with_logits). |
| `inputs` | A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network. |
| `num_sampled` | An `int`. The number of classes to randomly sample per batch. |
| `num_classes` | An `int`. The number of possible classes. |
| `num_true` | An `int`. The number of target classes per training example. |
| `sampled_values` | A tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. If `None`, defaults to `log_uniform_candidate_sampler`. |
| `remove_accidental_hits` | A `bool`. Whether to remove "accidental hits" where a sampled class equals one of the target classes. Defaults to `True`. |
| `seed` | Random seed for candidate sampling. Defaults to `None`, which does not set the op-level random seed for candidate sampling. |
| `name` | A name for the operation (optional). |

## Returns

A `batch_size` 1-D tensor of per-example sampled softmax losses.

Last updated 2023-10-06 UTC.