tf.distribute.experimental.partitioners.MinSizePartitioner
Partitioner that allocates a minimum size per shard.
Inherits From: Partitioner
tf.distribute.experimental.partitioners.MinSizePartitioner(
min_shard_bytes=(256 << 10), max_shards=1, bytes_per_string=16
)
Used in the notebooks: the "Parameter server training with ParameterServerStrategy" tutorial (https://www.tensorflow.org/tutorials/distribute/parameter_server_training).
This partitioner ensures each shard has at least min_shard_bytes, and tries to allocate as many shards as possible, i.e., keeping shard size as small as possible. The maximum number of such shards (upper bound) is given by max_shards.
Examples:

partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=2)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
print(partitions)  # [2, 1]

partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=10)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
print(partitions)  # [6, 1]
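In both examples the [6, 1] float32 variable occupies 6 * 4 = 24 bytes, so at most 24 / 4 = 6 shards can each hold the 4-byte minimum. The result is that bound capped by max_shards: min(6, 2) = 2 shards in the first case and min(6, 10) = 6 in the second.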
# use in ParameterServerStrategy
# strategy = tf.distribute.experimental.ParameterServerStrategy(
# cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
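A minimal sketch of the usage above, assuming cluster_resolver already describes a running parameter-server cluster; variables created under the strategy's scope are then sharded according to the partitioner:

strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
with strategy.scope():
  # Each variable created in this scope is split across parameter
  # servers following the partitions computed by `partitioner`.
  v = tf.Variable(tf.zeros([6, 1]))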
Args
  min_shard_bytes: Minimum bytes of each shard. Defaults to 256K.
  max_shards: Upper bound on the number of shards. Defaults to 1.
  bytes_per_string: If the partition value is of type string, this provides an estimate of how large each string is.
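Because the size of string elements is not known statically, the partitioner uses bytes_per_string as a per-element estimate. A sketch, with values chosen so the division is exact:

partitioner = MinSizePartitioner(min_shard_bytes=4096, max_shards=8,
                                 bytes_per_string=16)
# A [1024] string variable is estimated at 1024 * 16 = 16384 bytes,
# which fits 16384 / 4096 = 4 shards of at least min_shard_bytes each.
partitions = partitioner(tf.TensorShape([1024]), tf.string)
print(partitions)  # [4]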
Methods
__call__
__call__(
shape, dtype, axis=0
)
Partitions the given shape and returns the partition results.
Example of a partitioner that allocates a fixed number of shards:

partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
print(partitions)  # [2, 1]
Args
  shape: A tf.TensorShape, the shape to partition.
  dtype: A tf.dtypes.DType indicating the type of the partition value.
  axis: The axis to partition along. Default: outermost axis.
Returns
  A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis.
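For example, passing axis=1 partitions along the second dimension instead; a sketch with values chosen so the arithmetic is exact:

partitioner = MinSizePartitioner(min_shard_bytes=4, max_shards=2)
# The [1, 6] float32 variable holds 6 * 4 = 24 bytes, allowing up to
# 6 shards of 4 bytes each; capped at max_shards=2, all along axis 1.
partitions = partitioner(tf.TensorShape([1, 6]), tf.float32, axis=1)
print(partitions)  # [1, 2]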