# tf.distribute.experimental.partitioners.FixedShardsPartitioner
Partitioner that allocates a fixed number of shards.
Inherits From: `Partitioner`
    tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards
    )
#### Examples:

    # Standalone usage:
    partitioner = FixedShardsPartitioner(num_shards=2)
    partitions = partitioner(tf.TensorShape([10, 3]), tf.float32)
    # partitions == [2, 1]

    # Use in ParameterServerStrategy:
    # strategy = tf.distribute.experimental.ParameterServerStrategy(
    #     cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
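The partition result can also be applied by hand. Below is a minimal standalone sketch (illustrative only, not how the strategy shards variables internally) that splits a tensor into the shards the partitioner prescribes:

    import tensorflow as tf

    partitioner = tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards=2)

    value = tf.ones([10, 3])
    # Number of partitions per axis, e.g. [2, 1] for this shape.
    num_partitions = partitioner(value.shape, value.dtype)

    # Split along axis 0 into the prescribed number of shards.
    shards = tf.split(value, num_or_size_splits=num_partitions[0], axis=0)
    print([s.shape.as_list() for s in shards])  # [[5, 3], [5, 3]]

Under `ParameterServerStrategy`, this splitting is performed automatically for variables created inside the strategy's scope.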
| Args | |
|---|---|
| `num_shards` | `int`, the number of shards to partition into. |
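Note that the shard count is capped at the size of the partitioned axis (a behavior of the current implementation, worth verifying against your TensorFlow version): requesting more shards than the axis has elements yields one shard per element. A short sketch:

    import tensorflow as tf

    partitioner = tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards=8)
    # Axis 0 has only 4 elements, so the result is capped at 4 shards.
    partitions = partitioner(tf.TensorShape([4, 3]), tf.float32)
    print(partitions)  # [4, 1]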
## Methods

### `__call__`
[View source](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/distribute/sharded_variable.py#L107-L111)
    __call__(
        shape, dtype, axis=0
    )
Partitions the given `shape` and returns the partition results.
Example of a partitioner that allocates a fixed number of shards:

    partitioner = FixedShardsPartitioner(num_shards=2)
    partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
    print(partitions)  # [2, 1]
| Args | |
|---|---|
| `shape` | A `tf.TensorShape`, the shape to partition. |
| `dtype` | A `tf.dtypes.DType` indicating the type of the partition value. |
| `axis` | The axis to partition along. Default: the outermost axis. |
| Returns |
|---|
| A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis. |
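As a short illustration of the `axis` argument (the printed result follows from the behavior documented above):

    import tensorflow as tf

    partitioner = tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards=3)
    # Shard along the second axis; every other axis keeps a single partition.
    partitions = partitioner(tf.TensorShape([10, 6]), tf.float32, axis=1)
    print(partitions)  # [1, 3]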