A bool Tensor. This tensor controls whether the input is
added to the queue. If it is a scalar and evaluates to True, then
tensors are all added to the queue. If it is a vector and enqueue_many
is True, then each example is added to the queue only if the
corresponding value in keep_input is True. This tensor essentially
acts as a filtering mechanism (see the usage sketch following the Raises section below).
batch_size
The new batch size pulled from the queue.
num_threads
The number of threads enqueuing tensors. The batching will
be nondeterministic if num_threads > 1.
capacity
An integer. The maximum number of elements in the queue.
enqueue_many
Whether each tensor in tensors is a single example.
shapes
(Optional) The shapes for each example. Defaults to the
inferred shapes for tensors.
dynamic_pad
Boolean. Allow variable dimensions in input shapes.
The given dimensions are padded upon dequeue so that tensors within a
batch have the same shapes.
allow_smaller_final_batch
(Optional) Boolean. If True, allow the final
batch to be smaller if there are insufficient items left in the queue.
shared_name
(Optional) If set, this queue will be shared under the given
name across multiple sessions.
name
(Optional) A name for the operations.
Returns
A list or dictionary of tensors with the same types as tensors.
Raises
ValueError
If the shapes are not specified and cannot be
inferred from the elements of tensors.
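
For reference, here is a minimal, hypothetical sketch of how keep_input acts as a
per-example filter when enqueue_many is True. It assumes TensorFlow 1.x graph mode
with queue runners; the tensor values and the keep mask are illustrative, not taken
from this page.

    import tensorflow as tf  # assumes TensorFlow 1.x (graph mode, queue runners)

    # Four pre-batched examples; with enqueue_many=True each row is one example.
    examples = tf.constant([[1.0], [2.0], [3.0], [4.0]])
    # Hypothetical per-example mask: only rows where this is True are enqueued.
    keep = tf.constant([True, False, True, False])

    batched = tf.train.maybe_batch(
        [examples],
        keep_input=keep,
        batch_size=2,
        enqueue_many=True,             # each row of `examples` is one example
        allow_smaller_final_batch=True)

    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        print(sess.run(batched))       # only the rows kept by `keep` are batched
        coord.request_stop()
        coord.join(threads)

With a scalar keep_input, the same call either enqueues all of tensors or none of
them, regardless of enqueue_many.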
[null,null,["Last updated 2020-10-01 UTC."],[],[],null,["# tf.train.maybe_batch\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/training/input.py#L1023-L1077) |\n\nConditionally creates batches of tensors based on `keep_input`. (deprecated)\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.train.maybe_batch`](/api_docs/python/tf/compat/v1/train/maybe_batch)\n\n\u003cbr /\u003e\n\n tf.train.maybe_batch(\n tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False,\n shapes=None, dynamic_pad=False, allow_smaller_final_batch=False,\n shared_name=None, name=None\n )\n\n| **Warning:** THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by [`tf.data`](../../tf/data). Use `tf.data.Dataset.filter(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n\nSee docstring in `batch` for more details.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `tensors` | The list or dictionary of tensors to enqueue. |\n| `keep_input` | A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism. |\n| `batch_size` | The new batch size pulled from the queue. |\n| `num_threads` | The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads \u003e 1`. |\n| `capacity` | An integer. The maximum number of elements in the queue. |\n| `enqueue_many` | Whether each tensor in `tensors` is a single example. |\n| `shapes` | (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`. |\n| `dynamic_pad` | Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. |\n| `allow_smaller_final_batch` | (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue. |\n| `shared_name` | (Optional). If set, this queue will be shared under the given name across multiple sessions. |\n| `name` | (Optional) A name for the operations. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A list or dictionary of tensors with the same types as `tensors`. 
||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|-------------------------------------------------------------------------------------------|\n| `ValueError` | If the `shapes` are not specified, and cannot be inferred from the elements of `tensors`. |\n\n\u003cbr /\u003e"]]
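tf.train.maybe_batch is deprecated: queue-based input pipelines have been replaced
by tf.data, and the suggested replacement is tf.data.Dataset.filter(...).batch(batch_size)
(or padded_batch(...) if dynamic_pad=True). A rough, hypothetical equivalent of the
filtering sketch above might look like this; the predicate is purely illustrative.

    import tensorflow as tf

    # Keep only the examples the predicate accepts, then batch them.
    dataset = (
        tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0, 4.0])
        .filter(lambda x: tf.equal(tf.floormod(x, 2.0), 1.0))  # illustrative predicate
        .batch(2))

    # TF 1.x graph-mode consumption; in TF 2.x the dataset is iterated directly.
    next_batch = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
    with tf.compat.v1.Session() as sess:
        print(sess.run(next_batch))  # -> [1. 3.]

If padding behavior like dynamic_pad=True is needed, padded_batch(...) takes the
place of batch(...) here.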