# tfp.experimental.auto_batching.instructions.PrimOp
An arbitrary already-batched computation, a 'primitive operation'.
```
tfp.experimental.auto_batching.instructions.PrimOp(
    vars_in, vars_out, function, skip_push_mask
)
```
These are the items of work on which auto-batching is applied. The
`function` must accept and produce Tensors with a batch dimension,
and is free to stage any (batched) computation it wants.

Restriction: the `function` must use the same computation substrate
as the VM backend. That is, if the VM is staging to XLA, the
`function` will see XLA Tensor handles; if the VM is staging to
graph-mode TensorFlow, the `function` will see TensorFlow Tensors;
etc.

The current values of the `vars_out` are saved on their respective
stacks, and the results written to the new top.

The exact contract for `function` is as follows:
- It will be invoked with a list of positional (only) arguments,
  parallel to `vars_in`.
- Each argument will be a pattern of Tensors (meaning, either one
  Tensor or a (potentially nested) list or tuple of Tensors),
  corresponding to the `Type` of that variable.
- Each Tensor in the argument will have the `dtype` and shape
  given in the corresponding `TensorType`, and an additional leading
  batch dimension.
- Some indices in the batch dimension may contain junk data, if the
  corresponding threads are not executing this instruction [this is
  subject to change based on the batch execution strategy].
- The `function` must return a pattern of Tensors, or objects
  convertible to Tensors.
- The returned pattern must be compatible with the `Type`s of
  `vars_out`.
- The Tensors in the returned pattern must have `dtype` and shape
  compatible with the corresponding `TensorType`s of `vars_out`.
- The returned Tensors will be broadcast into their respective
  positions if necessary. The broadcasting *includes the batch
  dimension*: thus, a returned Tensor of insufficient rank (e.g., a
  constant) will be broadcast across batch members. In particular,
  a Tensor that carries the intended batch size but whose sub-batch
  shape is of too low a rank will broadcast incorrectly, resulting
  in an error.
- If the `function` raises an exception, it will propagate and abort
  the entire computation.
- Even in the TensorFlow backend, the `function` will be staged
  several times: at least twice during type inference (to ascertain
  the shapes of the Tensors it returns, as a function of the
  shapes of the Tensors it is given), and exactly once during
  executable graph construction.
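The contract above can be illustrated with a small NumPy sketch. NumPy stands in for whichever backend substrate the VM is staging to, and the function and variable names are illustrative, not part of the `PrimOp` API:

```python
import numpy as np

# A 'primitive operation' body: it accepts batched inputs and returns
# batched outputs.  Every argument carries a leading batch dimension.
def batched_add_and_scale(x, y):
    # x, y: arrays of shape [batch_size, ...]; the body is ordinary
    # vectorized code with no per-thread logic.
    return x + y, 2.0 * x

batch = np.arange(6.0).reshape(3, 2)   # batch_size 3, event shape [2]
ones = np.ones((3, 2))
s, d = batched_add_and_scale(batch, ones)
print(s.shape, d.shape)                # (3, 2) (3, 2)

# A returned constant of insufficient rank broadcasts across batch
# members, as the contract describes:
def batched_constant(x):
    return np.float64(7.0)             # rank 0; broadcast over the batch

out = np.broadcast_to(batched_constant(batch), batch.shape)
print(out[0, 0], out[2, 1])            # 7.0 7.0
```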
## Args

| Name | Description |
|------|-------------|
| `vars_in` | list of strings. The names of the VM variables whose current values to pass to the `function`. |
| `vars_out` | Pattern of strings. The names of the VM variables where to save the results returned from `function`. |
| `function` | Python callable implementing the computation. |
| `skip_push_mask` | Set of strings, a subset of `vars_out`. These VM variables will be updated in place rather than pushed. |
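How `skip_push_mask` changes a variable update can be sketched with plain Python lists standing in for the VM's per-variable stacks. This is a toy model of the push-vs-update-in-place distinction, not the actual VM implementation:

```python
# Toy model: each VM variable holds a stack; writing a result either
# pushes a new top or overwrites the current top.
stacks = {"x": [1.0], "acc": [0.0]}

def write_result(name, value, skip_push_mask):
    if name in skip_push_mask:
        stacks[name][-1] = value    # in place: no new stack entry
    else:
        stacks[name].append(value)  # default: push, old value preserved

write_result("x", 2.0, skip_push_mask=set())
write_result("acc", 5.0, skip_push_mask={"acc"})
print(stacks)  # {'x': [1.0, 2.0], 'acc': [5.0]}
```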
## Attributes

| Name | Description |
|------|-------------|
| `vars_in` | A `namedtuple` alias for field number 0 |
| `vars_out` | A `namedtuple` alias for field number 1 |
| `function` | A `namedtuple` alias for field number 2 |
| `skip_push_mask` | A `namedtuple` alias for field number 3 |
## Methods

### `replace`

View source

```
replace(
    vars_out=None
)
```

Return a copy of `self` with `vars_out` replaced.
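Since `PrimOp` is a `namedtuple`, `replace` behaves like a restricted `_replace`. The stand-in class below mimics the documented field layout to show the copy semantics; `FakePrimOp` is illustrative only, not the real class:

```python
import collections

# Stand-in with the same field layout the docs describe for PrimOp.
FakePrimOp = collections.namedtuple(
    "FakePrimOp", ["vars_in", "vars_out", "function", "skip_push_mask"])

op = FakePrimOp(["a", "b"], "c", lambda a, b: a + b, set())
op2 = op._replace(vars_out="d")   # copy with only vars_out replaced
print(op.vars_out, op2.vars_out)  # c d
print(op2.vars_in is op.vars_in)  # True: the other fields are shared
```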
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-11-21 UTC.