# tf.contrib.training.rejection_sample
Stochastically creates batches by rejection sampling.
```python
tf.contrib.training.rejection_sample(
    tensors, accept_prob_fn, batch_size, queue_threads=1, enqueue_many=False,
    prebatch_capacity=16, prebatch_threads=1, runtime_checks=False, name=None
)
```
Each list of non-batched tensors is evaluated by `accept_prob_fn` to produce a scalar tensor between 0 and 1. This tensor corresponds to the probability of the example being accepted. Once `batch_size` tensor groups have been accepted, the batch queue returns a mini-batch.
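For instance, this mechanism can be used to rebalance a skewed label distribution. The snippet below is a minimal, illustrative sketch (the class-0 keep probability of 0.1 is an arbitrary value chosen for the example, not part of this API): the acceptance function keeps under-represented classes with probability 1 and downsamples the over-represented class.

```python
import tensorflow as tf

# Illustrative sketch of an acceptance function for class rebalancing.
# `tensor_list` is the non-batched [data, label] pair that rejection_sample
# passes to accept_prob_fn; the return value must be a scalar in [0, 1].
def accept_prob_fn(tensor_list):
    _, label = tensor_list
    # Keep class 0 with probability 0.1, every other class with probability 1.
    return tf.where(tf.equal(label, 0),
                    tf.constant(0.1),
                    tf.constant(1.0))
```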
| Args | |
|---|---|
| `tensors` | List of tensors for data. All tensors are either one item or a batch, according to `enqueue_many`. |
| `accept_prob_fn` | A Python lambda that takes a non-batched tensor from each item in `tensors` and produces a scalar tensor. |
| `batch_size` | Size of the batch to be returned. |
| `queue_threads` | The number of threads for the queue that will hold the final batch. |
| `enqueue_many` | Bool. If true, interpret the input tensors as having a batch dimension. See the sketch after this table. |
| `prebatch_capacity` | Capacity of the large queue that is used to convert batched tensors to single examples. |
| `prebatch_threads` | Number of threads for the large queue that is used to convert batched tensors to single examples. |
| `runtime_checks` | Bool. If true, insert runtime checks on the output of `accept_prob_fn`. Using `True` might have a performance impact. |
| `name` | Optional prefix for ops created by this function. |
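As a rough sketch of the `enqueue_many` path (the inputs `batched_data` and `batched_labels` are assumed to already carry a leading batch dimension; they are placeholders for this example, not part of the API):

```python
# Already-batched inputs are unpacked into single examples before sampling.
data_batch, label_batch = tf.contrib.training.rejection_sample(
    [batched_data, batched_labels],
    accept_prob_fn,
    batch_size=32,
    enqueue_many=True,     # inputs have a leading batch dimension
    prebatch_capacity=64,  # queue holding the unpacked single examples
    runtime_checks=True)   # verify accept_prob_fn output is a valid probability
```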
| Raises | |
|---|---|
| `ValueError` | If `enqueue_many` is `True` and the labels don't have a batch dimension, or if `enqueue_many` is `False` and the labels aren't scalar. |
| `ValueError` | If `enqueue_many` is `True` and the batch dimensions of the data and labels don't match. |
| `ValueError` | If a zero initial probability class has a nonzero target probability. |
| Returns |
|---|
| A list of tensors of the same length as `tensors`, with batch dimension `batch_size`. |
#### Example:

```python
# Get tensor for a single data and label example.
data, label = data_provider.Get(['data', 'label'])

# Get stratified batch according to data tensor.
accept_prob_fn = lambda x: (tf.tanh(x[0]) + 1) / 2
data_batch = tf.contrib.training.rejection_sample(
    [data, label], accept_prob_fn, 16)

# Run batch through network.
...
```
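Because `rejection_sample` is built on input queues, the queue runners it registers have to be started before the batch can actually be evaluated. A minimal TF 1.x session sketch, continuing the example above, might look like this:

```python
with tf.compat.v1.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        # `data_batch` is a list of two tensors, each with batch dimension 16.
        data_values, label_values = sess.run(data_batch)
    finally:
        coord.request_stop()
        coord.join(threads)
```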