A distribution strategy for synchronous training on multiple workers.
Inherits From: Strategy
tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.AUTO,
    cluster_resolver=None
)
This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it creates copies of all variables in the model on each device across all workers.
It uses CollectiveOps's implementation of multi-worker all-reduce to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.
By default it uses all local GPUs or CPU for single-worker training.
When the 'TF_CONFIG' environment variable is set, it parses cluster_spec, task_type and task_id from 'TF_CONFIG' and turns into a multi-worker strategy that mirrors models on the GPUs of all machines in the cluster. In the current implementation, it uses all GPUs in the cluster and assumes all workers have the same number of GPUs.
You can also pass a distribute.cluster_resolver.ClusterResolver instance when instantiating the strategy. The task_type, task_id etc. will be parsed from the resolver instance instead of from the TF_CONFIG env var.
It supports both eager mode and graph mode. However, for eager mode, it has to set up the eager context in its constructor and therefore all ops in eager mode have to run after the strategy object is created.
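For example, a minimal TF_CONFIG-driven setup might look like the following sketch; the worker addresses are placeholders and must point to real hosts in your cluster, and TF_CONFIG must be set before the strategy object is created:
import json
import os
import tensorflow as tf

# Hypothetical two-worker cluster; replace the hosts with real addresses.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['host1:12345', 'host2:12345']},
    'task': {'type': 'worker', 'index': 0}  # this process is worker 0
})

# The strategy parses cluster_spec, task_type and task_id from TF_CONFIG.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()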
Args | |
---|---|
communication | optional Enum of type distribute.experimental.CollectiveCommunication. This provides a way for the user to override the choice of collective op communication. Possible values include AUTO, RING, and NCCL. |
cluster_resolver | optional distribute.cluster_resolver.ClusterResolver object. The default ClusterResolver that is used is the TFConfigClusterResolver, which is instantiated from the TF_CONFIG env var. |
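For instance, to force NCCL-based collective communication and pass an explicit resolver (a minimal sketch; the TFConfigClusterResolver shown here is also what the strategy falls back to when TF_CONFIG is set):
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL,
    cluster_resolver=resolver)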
Attributes | |
---|---|
cluster_resolver | Returns the cluster resolver associated with this strategy. As a multi-worker strategy, this is the cluster_resolver passed to the constructor, or the default TFConfigClusterResolver if none was provided. |
extended | tf.distribute.StrategyExtended with additional methods. |
num_replicas_in_sync | Returns number of replicas over which gradients are aggregated. |
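A common use of num_replicas_in_sync is to scale a per-replica batch size up to the global batch size (a minimal sketch; the per-replica batch size of 64 is arbitrary):
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
per_replica_batch_size = 64
# Scale the batch size by the number of replicas that train in sync.
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync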
Methods
experimental_assign_to_logical_device
experimental_assign_to_logical_device(
    tensor, logical_device_id
)
Adds annotation that tensor will be assigned to a logical device.
# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  output = tf.add(inputs, inputs)

  # Add operation will be executed on logical device 0.
  output = strategy.experimental_assign_to_logical_device(output, 0)
  return output

strategy.run(step_fn, args=(next(iterator),))
Args | |
---|---|
tensor | Input tensor to annotate. |
logical_device_id | Id of the logical core to which the tensor will be assigned. |

Raises | |
---|---|
ValueError | The logical device id presented is not consistent with the total number of partitions specified by the device assignment. |

Returns | |
---|---|
Annotated tensor with identical value as tensor. |
experimental_distribute_dataset
experimental_distribute_dataset(
    dataset, options=None
)
Creates a tf.distribute.DistributedDataset from a tf.data.Dataset.
The returned tf.distribute.DistributedDataset can be iterated over similar to how regular datasets can.
NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset.
The following is an example:
strategy = tf.distribute.MirroredStrategy()
# Create a dataset
dataset = tf.data.TFRecordDataset([
    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  strategy.run(replica_fn, args=(x,))
In the code snippet above, the tf.distribute.DistributedDataset dist_dataset is batched by GLOBAL_BATCH_SIZE, and we iterate through it using for x in dist_dataset. x is a tf.distribute.DistributedValues containing data for all replicas, which together aggregate to a batch of GLOBAL_BATCH_SIZE. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica.
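Neither replica_fn nor GLOBAL_BATCH_SIZE is defined in the snippet above; a minimal sketch of what they might look like, assuming a trivial element-wise computation:
GLOBAL_BATCH_SIZE = 8
dataset = tf.data.Dataset.range(32).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def replica_fn(inputs):
  # Each replica receives its slice of the global batch.
  return inputs * 2

for x in dist_dataset:
  per_replica_results = strategy.run(replica_fn, args=(x,))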
What's under the hood of this method, when we say the tf.data.Dataset instance - dataset - gets distributed? It depends on how you set the tf.data.experimental.AutoShardPolicy through tf.data.experimental.DistributeOptions. By default, it is set to tf.data.experimental.AutoShardPolicy.AUTO. In a multi-worker setting, we will first attempt to distribute dataset by detecting whether dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) and if so, try to shard the input files.
Note that there has to be at least one input file per worker. If you have fewer input files than workers, we suggest that you disable dataset sharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to tf.data.experimental.AutoShardPolicy.OFF, as shown in the sketch below.
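For example, auto-sharding can be disabled through tf.data options before the dataset is distributed (a minimal sketch; dataset refers to the snippet above):
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)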
If the attempt to shard by file is unsuccessful (i.e. the dataset is not read from files), we will shard the dataset evenly at the end by appending a .shard operation to the end of the processing pipeline. This will cause the entire preprocessing pipeline for all the data to be run on every worker, and each worker will do redundant work. We will print a warning if this route is selected.
As mentioned before, within each worker, we will also split the data among all the worker devices (if more than one is present). This will happen even if multi-worker sharding is disabled.
If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.experimental_distribute_datasets_from_function instead, which does not do any automatic splitting or sharding.
You can also use the element_spec property of the tf.distribute.DistributedDataset instance returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function.
strategy = tf.distribute.MirroredStrategy()
# Create a dataset
dataset = tf.data.TFRecordDataset([
    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(inputs):
  # train model with inputs
  return

# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  strategy.run(train_step, args=(x,))
Args | |
---|---|
dataset | tf.data.Dataset that will be sharded across all replicas using the rules stated above. |
options | tf.distribute.InputOptions used to control options on how this dataset is distributed. |

Returns | |
---|---|
A tf.distribute.DistributedDataset. |
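A sketch of passing the options argument, assuming this TensorFlow version's tf.distribute.InputOptions exposes the experimental_prefetch_to_device field:
input_options = tf.distribute.InputOptions(
    experimental_prefetch_to_device=False)
dist_dataset = strategy.experimental_distribute_dataset(
    dataset, options=input_options)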
experimental_distribute_datasets_from_function
experimental_distribute_datasets_from_function(
    dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn.
dataset_fn will be called once for each worker in the strategy. Each replica on that worker will dequeue one batch of inputs from the local Dataset (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step).
This method can be used for several purposes. For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.
experimental_distribute_dataset may also sometimes fail to split the batch across replicas on a worker. In that case, this method can be used where that limitation does not exist.
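As a sketch of such manual sharding (the file pattern is a placeholder and file-based input is assumed), dataset_fn can use the tf.distribute.InputContext it receives to keep only this worker's shard of the input files:
global_batch_size = 8

def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  files = tf.data.Dataset.list_files("/path/to/*.tfr", shuffle=False)
  # Keep only the files belonging to this input pipeline (worker).
  files = files.shard(input_context.num_input_pipelines,
                      input_context.input_pipeline_id)
  d = tf.data.TFRecordDataset(files)
  return d.batch(batch_size)

dist_dataset = strategy.experimental_distribute_datasets_from_function(
    dataset_fn)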
The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed.
You can also use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function.
global_batch_size = 8
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(
      global_batch_size)