# tensorflow::ops::DynamicPartition

`#include <data_flow_ops.h>`
Partitions `data` into `num_partitions` tensors using indices from `partitions`.
## Summary
For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i` are placed in `outputs[i]` in lexicographic order of `js`, and the first dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. In detail,
```
outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
```
`data.shape` must start with `partitions.shape`.
For example:
```
# Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []  # Empty with shape [0, 2]
outputs[1] = [[10, 20]]
```
```
# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]
```
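The vector example can also be run end to end with the C++ client API. The sketch below is illustrative rather than part of the generated reference; it assumes the `tensorflow/cc` client headers and a linked TensorFlow build, and variable names such as `root` and `dp` are our own.

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;

  Scope root = Scope::NewRootScope();

  // Mirrors the vector example: data = [10, 20, 30, 40, 50],
  // partitions = [0, 0, 1, 1, 0], num_partitions = 2.
  auto data = ops::Const(root, {10, 20, 30, 40, 50});
  auto partitions = ops::Const(root, {0, 0, 1, 1, 0});
  ops::DynamicPartition dp(root, data, partitions, /*num_partitions=*/2);

  ClientSession session(root);
  std::vector<Tensor> out;
  // dp[i] uses operator[] to select the i-th element of dp.outputs.
  TF_CHECK_OK(session.Run({dp[0], dp[1]}, &out));
  // out[0] now holds [10, 20, 50]; out[1] holds [30, 40].
  return 0;
}
```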
See `dynamic_stitch` for an example of how to merge partitions back.
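As a further illustration of that round trip (under the same assumptions as the sketch above, and again not part of the generated reference), partitioning the element indices alongside the data gives `DynamicStitch` the information it needs to restore the original order:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;

  Scope root = Scope::NewRootScope();
  auto data = ops::Const(root, {10, 20, 30, 40, 50});
  auto partitions = ops::Const(root, {0, 0, 1, 1, 0});
  // Position of each element in the original tensor.
  auto indices = ops::Const(root, {0, 1, 2, 3, 4});

  // Partition the data and the positions with the same `partitions` tensor.
  ops::DynamicPartition part_data(root, data, partitions, /*num_partitions=*/2);
  ops::DynamicPartition part_idx(root, indices, partitions, /*num_partitions=*/2);

  // DynamicStitch writes part_data[i][j] to position part_idx[i][j],
  // reconstructing the original order of `data`.
  ops::DynamicStitch stitched(root, part_idx.outputs, part_data.outputs);

  ClientSession session(root);
  std::vector<Tensor> out;
  TF_CHECK_OK(session.Run({stitched.merged}, &out));
  // out[0] holds [10, 20, 30, 40, 50] again.
  return 0;
}
```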
Args:

- scope: A `Scope` object
- partitions: Any shape. Indices in the range `[0, num_partitions)`.
- num_partitions: The number of partitions to output.
Returns:

- `OutputList`: The output tensors.
## Constructors and Destructors

### DynamicPartition

```
DynamicPartition(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input data,
  ::tensorflow::Input partitions,
  int64 num_partitions
)
```

## Public attributes

### operation

```
Operation operation
```

### outputs

```
::tensorflow::OutputList outputs
```

## Public functions

### operator[]

```
::tensorflow::Output operator[](
  size_t index
) const
```