When this op finishes, all ops in `inputs` have finished. This op has no
output.

When operating in a v1-style graph context, ops are not executed in the same
order as specified in the code; TensorFlow will attempt to execute ops in
parallel or in an order convenient to the result it is computing. `tf.group`
allows you to request that one or more results finish before execution
continues.
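
For example, here is a minimal sketch of that pattern in a v1-style graph
context. The variables, the `assign_add` updates, and the session usage below
are illustrative choices, not part of `tf.group` itself:

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()  # opt in to v1-style graph execution

    v = tf.compat.v1.Variable(0)
    w = tf.compat.v1.Variable(0)
    update_v = tf.compat.v1.assign_add(v, 1)
    update_w = tf.compat.v1.assign_add(w, 2)

    # A single op that carries control dependencies on both updates.
    updates = tf.group(update_v, update_w)

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        sess.run(updates)          # both assign_add ops have finished here
        print(sess.run([v, w]))    # [1, 2]

Because `updates` carries control dependencies on both assignments,
`sess.run(updates)` returns only after both `assign_add` ops have executed.
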
`tf.group` creates a single op (of type `NoOp`), and then adds appropriate
control dependencies. Thus, `c = tf.group(a, b)` will compute the same graph
as this:

    with tf.control_dependencies([a, b]):
        c = tf.no_op()

Note: In TensorFlow 2 with eager execution and/or Autograph, you should not
require this method, as ops execute in the expected order thanks to automatic
control dependencies. Only use `tf.group` when working with v1 `tf.Graph` code.

See also `tf.tuple` and `tf.control_dependencies`.

Args:
  *inputs: Zero or more tensors to group.
  name: A name for this operation (optional).

Returns:
  An Operation that executes all its inputs.

Raises:
  ValueError: If an unknown keyword argument is provided.