Builds a merged tensor such that `merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]`. Each `data[i].shape` must start with the corresponding `indices[i].shape`, and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we must have `data[i].shape = indices[i].shape + constant`. In terms of this `constant`, the output shape is
```
merged.shape = [max(indices)] + constant
```
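The shape rule is easy to check from Python. The sketch below uses `tf.dynamic_stitch` (the generic Python-level stitch API, which follows the same shape rule); the concrete shapes are arbitrary choices for illustration, with `indices[i].shape = [2]` and `constant = [3]`:

```python
import tensorflow as tf

# Two partitions: each indices[i] has shape [2], so each data[i]
# must have shape [2] + constant = [2, 3] for constant = [3].
indices = [tf.constant([0, 2]), tf.constant([1, 3])]
data = [tf.fill([2, 3], 10), tf.fill([2, 3], 20)]

merged = tf.dynamic_stitch(indices, data)
print(merged.shape)  # (4, 3): one row per index 0..3, followed by the constant part [3]
```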
Values may be merged in parallel, so if an index appears in both `indices[m][i]` and `indices[n][j]`, the result may be invalid. This differs from the normal [DynamicStitch](/versions/r2.14/api_docs/cc/class/tensorflow/ops/dynamic-stitch#classtensorflow_1_1ops_1_1_dynamic_stitch) operator, which defines the behavior in that case.
For example:

```
indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]
```
This method can be used to merge partitions created by `dynamic_partition`, as illustrated in the following example:
```python
import tensorflow as tf

# Apply a function (increment x_i) to the elements for which a certain
# condition applies (x_i != -1 in this example).
x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask = tf.not_equal(x, tf.constant(-1.))
partitioned_data = tf.dynamic_partition(
    x, tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
# Partition the original positions the same way, so each partition keeps
# track of where its elements came from; these become the stitch indices.
condition_indices = tf.dynamic_partition(
    tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x = [1.1, -1., 6.2, 5.3, -1., 8.4]; the -1. values remain unchanged.
```
[null,null,["Last updated 2023-10-06 UTC."],[],[],null,["# tensorflow::ops::ParallelDynamicStitch Class Reference\n\ntensorflow::ops::ParallelDynamicStitch\n======================================\n\n`#include \u003cdata_flow_ops.h\u003e`\n\nInterleave the values from the `data` tensors into a single tensor.\n\nSummary\n-------\n\nBuilds a merged tensor such that\n\n\n```transact-sql\n merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]\n```\n\n\u003cbr /\u003e\n\nFor example, if each `indices[m]` is scalar or vector, we have\n\n\n```transact-sql\n # Scalar indices:\n merged[indices[m], ...] = data[m][...]\n```\n\n\u003cbr /\u003e\n\n\n```transact-sql\n # Vector indices:\n merged[indices[m][i], ...] = data[m][i, ...]\n```\n\n\u003cbr /\u003e\n\nEach `data[i].shape` must start with the corresponding `indices[i].shape`, and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we must have `data[i].shape = indices[i].shape + constant`. In terms of this `constant`, the output shape is \n\n```gdscript\nmerged.shape = [max(indices)] + constant\n```\n\n\u003cbr /\u003e\n\nValues may be merged in parallel, so if an index appears in both `indices[m][i]` and `indices[n][j]`, the result may be invalid. This differs from the normal [DynamicStitch](/versions/r2.14/api_docs/cc/class/tensorflow/ops/dynamic-stitch#classtensorflow_1_1ops_1_1_dynamic_stitch) operator that defines the behavior in that case.\n\nFor example:\n\n\n```text\n indices[0] = 6\n indices[1] = [4, 1]\n indices[2] = [[5, 2], [0, 3]]\n data[0] = [61, 62]\n data[1] = [[41, 42], [11, 12]]\n data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]\n merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],\n [51, 52], [61, 62]]\n```\n\n\u003cbr /\u003e\n\nThis method can be used to merge partitions created by `dynamic_partition` as illustrated on the following example:\n\n\n```gdscript\n # Apply function (increments x_i) on elements for which a certain condition\n # apply (x_i != -1 in this example).\n x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])\n condition_mask=tf.not_equal(x,tf.constant(-1.))\n partitioned_data = tf.dynamic_partition(\n x, tf.cast(condition_mask, tf.int32) , 2)\n partitioned_data[1] = partitioned_data[1] + 1.0\n condition_indices = tf.dynamic_partition(\n tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)\n x = tf.dynamic_stitch(condition_indices, partitioned_data)\n # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. 
values remain\n # unchanged.\n```\n\n\u003cbr /\u003e\n\n\n\u003cbr /\u003e\n\nArgs:\n\n- scope: A [Scope](/versions/r2.14/api_docs/cc/class/tensorflow/scope#classtensorflow_1_1_scope) object\n\n\u003cbr /\u003e\n\nReturns:\n\n- [Output](/versions/r2.14/api_docs/cc/class/tensorflow/output#classtensorflow_1_1_output): The merged tensor.\n\n\u003cbr /\u003e\n\n| ### Constructors and Destructors ||\n|---|---|\n| [ParallelDynamicStitch](#classtensorflow_1_1ops_1_1_parallel_dynamic_stitch_1a6d5464f1c148b04bc28b9bff03f884d3)`(const ::`[tensorflow::Scope](/versions/r2.14/api_docs/cc/class/tensorflow/scope#classtensorflow_1_1_scope)` & scope, ::`[tensorflow::InputList](/versions/r2.14/api_docs/cc/class/tensorflow/input-list#classtensorflow_1_1_input_list)` indices, ::`[tensorflow::InputList](/versions/r2.14/api_docs/cc/class/tensorflow/input-list#classtensorflow_1_1_input_list)` data)` ||\n\n| ### Public attributes ||\n|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|\n| [merged](#classtensorflow_1_1ops_1_1_parallel_dynamic_stitch_1acf4ad6fe444ed11732637ae9f1951f16) | `::`[tensorflow::Output](/versions/r2.14/api_docs/cc/class/tensorflow/output#classtensorflow_1_1_output) |\n| [operation](#classtensorflow_1_1ops_1_1_parallel_dynamic_stitch_1a339e540a99d7624dfdc0236dcaaa7fd0) | [Operation](/versions/r2.14/api_docs/cc/class/tensorflow/operation#classtensorflow_1_1_operation) |\n\n| ### Public functions ||\n|-----------------------------------------------------------------------------------------------------------------------------------|------------------------|\n| [node](#classtensorflow_1_1ops_1_1_parallel_dynamic_stitch_1af337a4bfc6cb29dc5bf35e4158622436)`() const ` | `::tensorflow::Node *` |\n| [operator::tensorflow::Input](#classtensorflow_1_1ops_1_1_parallel_dynamic_stitch_1aa13a376d3e19711dd994e37a3c97cbc8)`() const ` | |\n| [operator::tensorflow::Output](#classtensorflow_1_1ops_1_1_parallel_dynamic_stitch_1aa279ea721b609a0870436bf241c90c9f)`() const ` | |\n\nPublic attributes\n-----------------\n\n### merged\n\n```text\n::tensorflow::Output merged\n``` \n\n### operation\n\n```text\nOperation operation\n``` \n\nPublic functions\n----------------\n\n### ParallelDynamicStitch\n\n```gdscript\n ParallelDynamicStitch(\n const ::tensorflow::Scope & scope,\n ::tensorflow::InputList indices,\n ::tensorflow::InputList data\n)\n``` \n\n### node\n\n```gdscript\n::tensorflow::Node * node() const \n``` \n\n### operator::tensorflow::Input\n\n```gdscript\n operator::tensorflow::Input() const \n``` \n\n### operator::tensorflow::Output\n\n```gdscript\n operator::tensorflow::Output() const \n```"]]
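The C++ reference above has no end-to-end snippet. For quick experimentation, the same registered op can be reached from Python through `tf.raw_ops`; the following minimal sketch (assuming TensorFlow 2.x with eager execution) reproduces the worked example shown earlier:

```python
import tensorflow as tf

indices = [
    tf.constant(6),
    tf.constant([4, 1]),
    tf.constant([[5, 2], [0, 3]]),
]
data = [
    tf.constant([61, 62]),
    tf.constant([[41, 42], [11, 12]]),
    tf.constant([[[51, 52], [21, 22]], [[1, 2], [31, 32]]]),
]

# tf.raw_ops mirrors the registered op name and its input names.
merged = tf.raw_ops.ParallelDynamicStitch(indices=indices, data=data)
# merged == [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]]
```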