tf.distribute.experimental.partitioners.MaxSizePartitioner
Partitioner that keeps shards below max_shard_bytes.
Inherits From: Partitioner
tf.distribute.experimental.partitioners.MaxSizePartitioner(
max_shard_bytes, max_shards=None, bytes_per_string=16
)
This partitioner ensures each shard has at most max_shard_bytes, and tries to allocate as few shards as possible, i.e., it keeps each shard as large as possible.

If the partitioner hits the max_shards limit, each shard may end up larger than max_shard_bytes. By default max_shards is None and no limit on the number of shards is enforced.
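To make the sizing rule concrete, here is a minimal sketch of the arithmetic, assuming partitioning along the outermost axis; estimate_partitions is a hypothetical helper for illustration, not part of TensorFlow:

import math

# Hypothetical helper mirroring the sizing rule described above: use the
# fewest shards along axis 0 that keep each shard at or below
# max_shard_bytes; an explicit max_shards cap takes precedence and may
# push individual shards over the byte limit.
def estimate_partitions(shape, bytes_per_element, max_shard_bytes, max_shards=None):
    total_bytes = math.prod(shape) * bytes_per_element
    shards = max(1, math.ceil(total_bytes / max_shard_bytes))
    shards = min(shards, shape[0])  # cannot split axis 0 finer than its length
    if max_shards is not None:
        shards = min(shards, max_shards)
    return [shards] + [1] * (len(shape) - 1)

print(estimate_partitions([6, 1], 4, max_shard_bytes=4))                # [6, 1]
print(estimate_partitions([6, 1], 4, max_shard_bytes=4, max_shards=2))  # [2, 1]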
Examples:

partitioner = MaxSizePartitioner(max_shard_bytes=4)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
print(partitions)  # [6, 1]

partitioner = MaxSizePartitioner(max_shard_bytes=4, max_shards=2)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
print(partitions)  # [2, 1]

partitioner = MaxSizePartitioner(max_shard_bytes=1024)
partitions = partitioner(tf.TensorShape([6, 1]), tf.float32)
print(partitions)  # [1, 1]

# Use in ParameterServerStrategy:
# strategy = tf.distribute.experimental.ParameterServerStrategy(
#     cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
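A hedged sketch of that ParameterServerStrategy usage, assuming cluster_resolver is already configured for your cluster; variables created under the strategy's scope are sharded according to the partitioner:

strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=cluster_resolver, variable_partitioner=partitioner)
with strategy.scope():
    # Created under the scope, this variable is sharded per `partitioner`.
    v = tf.Variable(tf.zeros([1024, 64]))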
Args:
  max_shard_bytes: The maximum size, in bytes, that any given shard is allowed to be.
  max_shards: The maximum number of shards to create, as an int; takes precedence over max_shard_bytes.
  bytes_per_string: If the partition value is of string type, an estimate of how large, in bytes, each string is. See the example below.
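Since tf.string elements have no fixed size, the partitioner estimates each element at bytes_per_string bytes. The result below is a sketch following from that estimate, not verified output:

partitioner = MaxSizePartitioner(max_shard_bytes=64, bytes_per_string=16)
# Estimated size: 8 strings * 16 bytes = 128 bytes -> two 64-byte shards.
partitions = partitioner(tf.TensorShape([8]), tf.string)  # expected: [2]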
Methods
__call__
__call__(
shape, dtype, axis=0
)
Partitions the given shape and returns the partition results.
Example of a partitioner that allocates a fixed number of shards:

partitioner = FixedShardsPartitioner(num_shards=2)
partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0)
print(partitions)  # [2, 1]
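The axis argument selects which dimension is split. A sketch with MaxSizePartitioner along axis 1; the expected value follows from the sizing rule above, not from verified output:

partitioner = MaxSizePartitioner(max_shard_bytes=12)
partitions = partitioner(tf.TensorShape([2, 6]), tf.float32, axis=1)
print(partitions)  # expected: [1, 4] -- 48 bytes / 12 = 4 shards along axis 1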
Args:
  shape: A tf.TensorShape; the shape to partition.
  dtype: A tf.dtypes.DType indicating the type of the partition value.
  axis: The axis to partition along. Default: the outermost axis.

Returns:
  A list of integers representing the number of partitions on each axis, where the i-th value corresponds to the i-th axis.