Lazy bucketing of input tensors according to which_bucket.
```python
tf.contrib.training.bucket(
    tensors,
    which_bucket,
    batch_size,
    num_buckets,
    num_threads=1,
    capacity=32,
    bucket_capacities=None,
    shapes=None,
    dynamic_pad=False,
    allow_smaller_final_batch=False,
    keep_input=True,
    shared_name=None,
    name=None
)
```
tensors can be a list or a dictionary of tensors.
The value returned by the function will be of the same type as tensors.
The tensors entering this function are put into the bucket given by
which_bucket. Each bucket has its own queue. When a bucket contains
batch_size elements, this minibatch is pushed onto a top queue. The
tensors returned from this function are the result of dequeueing the
next minibatch from this top queue.
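For illustration, here is a minimal sketch of building a bucketed pipeline. The input element (a random length and a matching sequence) is a stand-in for a real input pipeline, and the names length, sequence, and the three-bucket split are assumptions of this example, not part of the API.

```python
import tensorflow as tf  # TF 1.x, where tf.contrib is available

# Stand-in for a real input pipeline: each enqueue evaluates these tensors
# and produces one new element (a scalar length and a 1-D sequence).
length = tf.random_uniform([], minval=1, maxval=30, dtype=tf.int32)
sequence = tf.random_uniform([length], minval=0, maxval=10, dtype=tf.int32)

# Route each element into one of 3 buckets by length: [1, 10), [10, 20), [20, 30).
which_bucket = tf.minimum(length // 10, 2)

# Each bucket accumulates elements; full minibatches go onto the top queue.
bucket_key, (lengths, sequences) = tf.contrib.training.bucket(
    tensors=[length, sequence],
    which_bucket=which_bucket,
    batch_size=4,
    num_buckets=3,
    dynamic_pad=True)  # sequences within a minibatch are padded to equal length
```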
This function is implemented using several queues. A
QueueRunner for the
queues is added to the current Graph's QUEUE_RUNNER collection.
As the returned tensors are the result of a dequeue operation, evaluating
them will throw a
tf.errors.OutOfRangeError when the input queue is
exhausted. If these tensors are feeding another input queue, its queue runner
will catch this exception; however, if they are used in your main thread
you are responsible for catching this yourself.
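A sketch of that main-thread pattern, assuming the lengths and sequences tensors from the sketch above; the coordinator-based driver is the standard TF 1.x queue-runner idiom rather than anything specific to this function.

```python
sess = tf.Session()
coord = tf.train.Coordinator()
# Start the QueueRunner threads registered by bucket() (and any others).
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
    while not coord.should_stop():
        batch_lengths, batch_sequences = sess.run([lengths, sequences])
        # ... consume the minibatch here ...
except tf.errors.OutOfRangeError:
    # Raised once the queues are closed and no full minibatch remains.
    print('Input exhausted.')
finally:
    coord.request_stop()
    coord.join(threads)
    sess.close()
```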
If dynamic_pad is True, it is sufficient that the rank of the
tensors is known, but individual dimensions may have shape None.
In this case, for each enqueue the dimensions with value None
may have a variable length; upon dequeue, the output tensors will be padded
on the right to the maximum shape of the tensors in the current minibatch.
For numbers, this padding takes value 0. For strings, this padding is
the empty string. See
PaddingFIFOQueue for more info.
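To see the padding behaviour in isolation, here is a small standalone sketch using PaddingFIFOQueue directly (the queue type used internally when dynamic_pad is True); the enqueued values are arbitrary.

```python
import tensorflow as tf  # TF 1.x

# A queue holding 1-D int32 vectors of unknown length.
q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
enq_a = q.enqueue(([1, 2, 3],))
enq_b = q.enqueue(([4, 5, 6, 7, 8],))
batch = q.dequeue_many(2)  # pads each element to the batch's maximum shape

with tf.Session() as sess:
    sess.run([enq_a, enq_b])
    print(sess.run(batch))
    # [[1 2 3 0 0]
    #  [4 5 6 7 8]]
```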
If allow_smaller_final_batch is True, a smaller batch value than
batch_size is returned when the queues are closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
get_shape() method, will have a 0th Dimension value of None, and
operations that depend on a fixed batch_size would fail.
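The effect on static shapes can be checked directly; this sketch reuses the length, sequence, and which_bucket tensors assumed in the first example.

```python
_, outputs = tf.contrib.training.bucket(
    tensors=[length, sequence],
    which_bucket=which_bucket,
    batch_size=4,
    num_buckets=3,
    dynamic_pad=True,
    allow_smaller_final_batch=True)
print(outputs[1].get_shape())  # (?, ?): the 0th dimension is None
```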
|tensors|The list or dictionary of tensors, representing a single element, to bucket. Nested lists are not supported.|
|which_bucket|An int32 scalar Tensor taking a value in [0, num_buckets).|
|batch_size|The new batch size pulled from the queue (all queues will have the same size). If a list is passed in then each bucket will have a different batch_size. (python int, int32 scalar or iterable of integers of length num_buckets).|
|num_buckets|A python integer, the number of buckets.|
|num_threads|An integer. The number of threads enqueuing tensors.|
|capacity|An integer. The maximum number of minibatches in the top queue, and also (by default) the maximum number of elements within each bucket.|
|bucket_capacities|(Optional) None or a list of integers, the capacities of each bucket. If None, capacity is used (default). If specified, it must be a list of integers of length num_buckets: the i-th element is used as capacity for the i-th bucket queue.|
|shapes|(Optional) The shapes for each example. Defaults to the inferred shapes for tensors.|
|dynamic_pad|Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.|
|allow_smaller_final_batch|(Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queues.|
|keep_input|A bool scalar Tensor. This tensor controls whether the input is added to the queue or not. If it is True, tensors are added to the bucket; otherwise they are dropped.|
|shared_name|(Optional). If set, the queues will be shared under the given name across multiple sessions.|
|name|(Optional) A name for the operations.|
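As a final sketch, keep_input can act as a per-element filter. This again reuses the length, sequence, and which_bucket tensors from the first example; the odd-length rule is purely illustrative.

```python
keep = tf.equal(length % 2, 1)  # keep only odd-length elements
bucket_key, outputs = tf.contrib.training.bucket(
    tensors=[length, sequence],
    which_bucket=which_bucket,
    batch_size=4,
    num_buckets=3,
    dynamic_pad=True,
    keep_input=keep)
```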