tf.raw_ops.BatchFunction
Batches all the input tensors to the computation done by the function.
tf.raw_ops.BatchFunction(
in_tensors, captured_tensors, f, num_batch_threads, max_batch_size,
batch_timeout_micros, Tout, max_enqueued_batches=10, allowed_batch_sizes=[],
container='', shared_name='', batching_queue='', name=None
)
For example, in the following code:
import tensorflow as tf  # assumes TF 1.x-style graph-mode APIs (Defun, placeholders)
from tensorflow.python.ops import gen_batch_ops

# This input will be captured.
y = tf.placeholder_with_default(1.0, shape=[])

@tf.Defun(tf.float32)
def computation(a):
  return tf.matmul(a, a) + y

# `a` is assumed to be a float32 Tensor (e.g. a placeholder) defined elsewhere.
b = gen_batch_ops.batch_function(
    f=computation,
    in_tensors=[a],
    captured_tensors=computation.captured_inputs,
    Tout=[o.type for o in computation.definition.signature.output_arg],
    num_batch_threads=1,
    max_batch_size=10,
    batch_timeout_micros=100000,  # 100ms
    allowed_batch_sizes=[3, 10],
    batching_queue="")
If more than one session.run call is simultaneously trying to compute `b`,
the values of `a` will be gathered, non-deterministically concatenated
along the first axis, and only one thread will run the computation.
Assumes that all arguments of the function are Tensors which will be batched
along their first dimension.
Arguments that are captured are not batched. The session.run call that does
the concatenation will use the values of the captured tensors available to it.
Therefore, typical uses of captured tensors should involve values that remain
unchanged across session.run calls. Inference is a good example of this.
SparseTensor is not supported. The return value of the decorated function
must be a Tensor or a list/tuple of Tensors.
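The concurrency behavior above can be exercised with plain Python threads and a TF 1.x session. What follows is a minimal, self-contained sketch rather than anything from the TensorFlow codebase: it assumes TF 1.x graph-mode APIs and swaps in a hypothetical element-wise function (`double`) in place of the `matmul` example, so that concatenating inputs along the first axis is always shape-valid.

import threading
import numpy as np
import tensorflow as tf  # assumes TF 1.x graph-mode APIs
from tensorflow.python.ops import gen_batch_ops

x = tf.placeholder(tf.float32, shape=[None, 4])

@tf.Defun(tf.float32)
def double(t):
  return t * 2.0

out = gen_batch_ops.batch_function(
    f=double,
    in_tensors=[x],
    captured_tensors=double.captured_inputs,
    Tout=[o.type for o in double.definition.signature.output_arg],
    num_batch_threads=1,
    max_batch_size=8,
    batch_timeout_micros=100000,  # 100ms
    allowed_batch_sizes=[4, 8],
    batching_queue="")

sess = tf.Session()

def worker():
  # Concurrent calls may have their inputs concatenated along axis 0, run
  # through `double` as a single batch, and split back per caller.
  print(sess.run(out, feed_dict={x: np.ones((2, 4), np.float32)}))

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
  t.start()
for t in threads:
  t.join()

If the three calls overlap within the 100 ms window, their six rows may be gathered into one batch, padded up to the allowed size of 8, run through `double` once, and then split back to the individual callers.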
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
<tr>
<td>
`in_tensors`
</td>
<td>
A list of `Tensor` objects. The tensors to be batched.
</td>
</tr><tr>
<td>
`captured_tensors`
</td>
<td>
A list of `Tensor` objects.
The tensors which are captured in the function, and don't need
to be batched.
</td>
</tr><tr>
<td>
`f`
</td>
<td>
A function decorated with @Defun.
</td>
</tr><tr>
<td>
`num_batch_threads`
</td>
<td>
An `int`.
Number of scheduling threads for processing batches of work.
Determines the number of batches processed in parallel.
</td>
</tr><tr>
<td>
`max_batch_size`
</td>
<td>
An `int`. Batch sizes will never be bigger than this.
</td>
</tr><tr>
<td>
`batch_timeout_micros`
</td>
<td>
An `int`.
Maximum number of microseconds to wait before outputting
an incomplete batch.
</td>
</tr><tr>
<td>
`Tout`
</td>
<td>
A list of `tf.DTypes` that has length `>= 1`.
The types of the output tensors.
</td>
</tr><tr>
<td>
`max_enqueued_batches`
</td>
<td>
An optional `int`. Defaults to `10`.
Maximum number of batches enqueued.
</td>
</tr><tr>
<td>
`allowed_batch_sizes`
</td>
<td>
An optional list of `ints`. Defaults to `[]`.
If left empty, does nothing. Otherwise, supplies a list of allowed batch
sizes, causing the op to pad batches up to one of those sizes. The entries
must increase monotonically, and the final entry must equal `max_batch_size`.
The padding step is illustrated in the sketch after this table.
</td>
</tr><tr>
<td>
`container`
</td>
<td>
An optional `string`. Defaults to `""`.
Controls the scope of sharing of this batch.
</td>
</tr><tr>
<td>
`shared_name`
</td>
<td>
An optional `string`. Defaults to `""`.
Concurrently running instances of batch in the same device with the
same container and shared_name will batch their elements together. If left
empty, the op name will be used as the shared name.
</td>
</tr><tr>
<td>
`batching_queue`
</td>
<td>
An optional `string`. Defaults to `""`.
</td>
</tr><tr>
<td>
`name`
</td>
<td>
A name for the operation (optional).
</td>
</tr>
</table>
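For a concrete picture of the `allowed_batch_sizes` padding described above, here is a small NumPy sketch of the pad-up step. This is an illustration, not the op's implementation; in particular, the contents of the padding rows are an implementation detail, and zeros are used here only for readability.

import numpy as np

def pad_to_allowed_size(batch, allowed_batch_sizes):
  # Pick the smallest allowed size that fits the gathered batch and pad
  # along the first (batch) dimension up to it.
  n = batch.shape[0]
  target = next(s for s in allowed_batch_sizes if s >= n)
  return np.pad(batch, [(0, target - n)] + [(0, 0)] * (batch.ndim - 1))

# With allowed_batch_sizes=[3, 10] (as in the example above), a gathered
# batch of 4 rows is padded up to 10 before the function runs; the extra
# rows are dropped when results are split back to the callers.
print(pad_to_allowed_size(np.ones((4, 2), np.float32), [3, 10]).shape)  # (10, 2)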
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Returns</h2></th></tr>
<tr class="alt">
<td colspan="2">
A list of `Tensor` objects of type `Tout`.
</td>
</tr>
</table>
[null,null,["Last updated 2020-10-01 UTC."],[],[],null,["# tf.raw_ops.BatchFunction\n\n\u003cbr /\u003e\n\nBatches all the inputs tensors to the computation done by the function.\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.raw_ops.BatchFunction`](/api_docs/python/tf/raw_ops/BatchFunction)\n\n\u003cbr /\u003e\n\n tf.raw_ops.BatchFunction(\n in_tensors, captured_tensors, f, num_batch_threads, max_batch_size,\n batch_timeout_micros, Tout, max_enqueued_batches=10, allowed_batch_sizes=[],\n container='', shared_name='', batching_queue='', name=None\n )\n\nSo, for example, in the following code \n\n\n # This input will be captured.\n y = tf.placeholder_with_default(1.0, shape=[])\n\n @tf.Defun(tf.float32)\n def computation(a):\n return tf.matmul(a, a) + y\n\n b = gen_batch_ops.batch_function(\n f=computation\n in_tensors=[a],\n captured_tensors=computation.captured_inputs,\n Tout=[o.type for o in computation.definition.signature.output_arg],\n num_batch_threads=1,\n max_batch_size=10,\n batch_timeout_micros=100000, # 100ms\n allowed_batch_sizes=[3, 10],\n batching_queue=\"\")\n\n If more than one session.run call is simultaneously trying to compute `b`\n the values of `a` will be gathered, non-deterministically concatenated\n along the first axis, and only one thread will run the computation.\n\n Assumes that all arguments of the function are Tensors which will be batched\n along their first dimension.\n\n Arguments that are captured, are not batched. The session.run call which does\n the concatenation, will use the values of the captured tensors available to it.\n Therefore, typical uses of captured tensors should involve values which remain\n unchanged across session.run calls. Inference is a good example of this.\n\n SparseTensor is not supported. The return value of the decorated function\n must be a Tensor or a list/tuple of Tensors.\n\n \u003c!-- Tabular view --\u003e\n \u003ctable class=\"responsive fixed orange\"\u003e\n \u003ccolgroup\u003e\u003ccol width=\"214px\"\u003e\u003ccol\u003e\u003c/colgroup\u003e\n \u003ctr\u003e\u003cth colspan=\"2\"\u003e\u003ch2 class=\"add-link\"\u003eArgs\u003c/h2\u003e\u003c/th\u003e\u003c/tr\u003e\n\n \u003ctr\u003e\n \u003ctd\u003e\n `in_tensors`\n \u003c/td\u003e\n \u003ctd\u003e\n A list of `Tensor` objects. The tensors to be batched.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `captured_tensors`\n \u003c/td\u003e\n \u003ctd\u003e\n A list of `Tensor` objects.\n The tensors which are captured in the function, and don't need\n to be batched.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `f`\n \u003c/td\u003e\n \u003ctd\u003e\n A function decorated with @Defun.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `num_batch_threads`\n \u003c/td\u003e\n \u003ctd\u003e\n An `int`.\n Number of scheduling threads for processing batches of work.\n Determines the number of batches processed in parallel.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `max_batch_size`\n \u003c/td\u003e\n \u003ctd\u003e\n An `int`. 
Batch sizes will never be bigger than this.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `batch_timeout_micros`\n \u003c/td\u003e\n \u003ctd\u003e\n An `int`.\n Maximum number of microseconds to wait before outputting\n an incomplete batch.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `Tout`\n \u003c/td\u003e\n \u003ctd\u003e\n A list of `tf.DTypes` that has length `\u003e= 1`.\n the types of the output tensors.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `max_enqueued_batches`\n \u003c/td\u003e\n \u003ctd\u003e\n An optional `int`. Defaults to `10`.\n Maximum number of batches enqueued. Default: 10.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `allowed_batch_sizes`\n \u003c/td\u003e\n \u003ctd\u003e\n An optional list of `ints`. Defaults to `[]`.\n Optional list of allowed batch sizes. If left empty, does\n nothing. Otherwise, supplies a list of batch sizes, causing the op to pad\n batches up to one of those sizes. The entries must increase monotonically, and\n the final entry must equal max_batch_size.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `container`\n \u003c/td\u003e\n \u003ctd\u003e\n An optional `string`. Defaults to `\"\"`.\n Controls the scope of sharing of this batch.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `shared_name`\n \u003c/td\u003e\n \u003ctd\u003e\n An optional `string`. Defaults to `\"\"`.\n Concurrently running instances of batch in the same device with the\n same container and shared_name will batch their elements together. If left\n empty, the op name will be used as the shared name.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `batching_queue`\n \u003c/td\u003e\n \u003ctd\u003e\n An optional `string`. Defaults to `\"\"`.\n \u003c/td\u003e\n \u003c/tr\u003e\u003ctr\u003e\n \u003ctd\u003e\n `name`\n \u003c/td\u003e\n \u003ctd\u003e\n A name for the operation (optional).\n \u003c/td\u003e\n \u003c/tr\u003e\n \u003c/table\u003e\n\n\n\n \u003c!-- Tabular view --\u003e\n \u003ctable class=\"responsive fixed orange\"\u003e\n \u003ccolgroup\u003e\u003ccol width=\"214px\"\u003e\u003ccol\u003e\u003c/colgroup\u003e\n \u003ctr\u003e\u003cth colspan=\"2\"\u003e\u003ch2 class=\"add-link\"\u003eReturns\u003c/h2\u003e\u003c/th\u003e\u003c/tr\u003e\n \u003ctr class=\"alt\"\u003e\n \u003ctd colspan=\"2\"\u003e\n A list of `Tensor` objects of type `Tout`.\n \u003c/td\u003e\n \u003c/tr\u003e\n\n \u003c/table\u003e"]]