Synchronous training across multiple replicas on one machine.
Inherits From: Strategy
tf.distribute.MirroredStrategy(
    devices=None, cross_device_ops=None
)
This strategy is typically used for training on one machine with multiple GPUs. For TPUs, use tf.distribute.TPUStrategy. To use MirroredStrategy with multiple workers, please refer to tf.distribute.experimental.MultiWorkerMirroredStrategy.
For example, a variable created under a MirroredStrategy is a MirroredVariable. If no devices are specified in the constructor argument of the strategy, then it will use all the available GPUs. If no GPUs are found, it will use the available CPUs. Note that TensorFlow treats all CPUs on a machine as a single device, and uses threads internally for parallelism.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  x = tf.Variable(1.)
x
MirroredVariable:{
  0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
}
While using distribution strategies, all the variable creation should be done within the strategy's scope. This will replicate the variables across all the replicas and keep them in sync using an all-reduce algorithm.
Variables created inside a MirroredStrategy which is wrapped with a tf.function are still MirroredVariables.
x = []
@tf.function  # Wrap the function with tf.function.
def create_variable():
  if not x:
    x.append(tf.Variable(1.))
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  create_variable()
  print(x[0])
MirroredVariable:{
  0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
}
experimental_distribute_dataset can be used to distribute the dataset across the replicas when writing your own training loop. If you are using the .fit and .compile methods available in tf.keras, then tf.keras will handle the distribution for you (a sketch of that path follows the example below).
For example:
my_strategy = tf.distribute.MirroredStrategy()
with my_strategy.scope():
  @tf.function
  def distribute_train_epoch(dataset):
    def replica_fn(input):
      # process input and return result
      return result

    total_result = 0
    for x in dataset:
      per_replica_result = my_strategy.run(replica_fn, args=(x,))
      total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,
                                         per_replica_result, axis=None)
    return total_result

  dist_dataset = my_strategy.experimental_distribute_dataset(dataset)

  for _ in range(EPOCHS):
    train_result = distribute_train_epoch(dist_dataset)
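Here is the sketch of the Keras path mentioned above; the model, optimizer, and in-memory data below are illustrative placeholders rather than part of this API:
import numpy as np

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  # Variables created by the model and optimizer become MirroredVariables.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
  model.compile(optimizer='sgd', loss='mse')

# Illustrative in-memory data; .fit distributes the batches across replicas.
features, labels = np.ones((64, 10)), np.ones((64, 1))
model.fit(features, labels, batch_size=16, epochs=2)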
Args | |
---|---|
devices | a list of device strings such as ['/gpu:0', '/gpu:1']. If None, all available GPUs are used. If no GPUs are found, CPU is used. |
cross_device_ops | optional, a descendant of CrossDeviceOps. If this is not set, NcclAllReduce() will be used by default. One would customize this if NCCL isn't available or if a special implementation that exploits the particular hardware is available. |
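For example, a sketch of a customized construction; the device names are illustrative, and tf.distribute.HierarchicalCopyAllReduce is just one alternative when NCCL is unavailable:
# Mirror across the first two GPUs only and use a non-NCCL all-reduce.
strategy = tf.distribute.MirroredStrategy(
    devices=['/gpu:0', '/gpu:1'],
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())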
Attributes | |
---|---|
cluster_resolver | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker strategy, there is a tf.distribute.cluster_resolver.ClusterResolver associated with it, and it is returned by this property. Strategies that intend to have an associated ClusterResolver must set the relevant attribute or override this property; otherwise, None is returned by default. Single-worker strategies usually do not have a ClusterResolver, and in those cases this property will return None. The ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For more information, please see the tf.distribute.cluster_resolver.ClusterResolver API docstring. |
extended | tf.distribute.StrategyExtended with additional methods. |
num_replicas_in_sync | Returns number of replicas over which gradients are aggregated. |
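For example, num_replicas_in_sync is commonly used to derive a global batch size from a per-replica batch size; the values below are illustrative:
strategy = tf.distribute.MirroredStrategy()
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync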
Methods
experimental_assign_to_logical_device
experimental_assign_to_logical_device(
    tensor, logical_device_id
)
Adds annotation that tensor will be assigned to a logical device.
# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  output = tf.add(inputs, inputs)

  # Add operation will be executed on logical device 0.
  output = strategy.experimental_assign_to_logical_device(output, 0)
  return output

strategy.run(step_fn, args=(next(iterator),))
Args | |
---|---|
tensor | Input tensor to annotate. |
logical_device_id | Id of the logical core to which the tensor will be assigned. |
Raises | |
---|---|
ValueError | The logical device id presented is not consistent with total number of partitions specified by the device assignment. |
Returns | |
---|---|
Annotated tensor with identical value as tensor. |
experimental_distribute_dataset
experimental_distribute_dataset(
    dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset.
The returned tf.distribute.DistributedDataset can be iterated over similar to how regular datasets can.
NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset.
The following is an example:
strategy = tf.distribute.MirroredStrategy()

# Create a dataset
dataset = tf.data.TFRecordDataset(
    ["/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  strategy.run(replica_fn, args=(x,))
In the code snippet above, the tf.distribute.DistributedDataset dist_dataset is batched by GLOBAL_BATCH_SIZE, and we iterate through it using for x in dist_dataset. x is a tf.distribute.DistributedValues containing data for all replicas, which aggregates to a batch of GLOBAL_BATCH_SIZE. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica.
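If you need to inspect the per-replica components of such a distributed value, one option is tf.distribute.Strategy.experimental_local_results; the sketch below assumes the loop above:
for x in dist_dataset:
  per_replica_result = strategy.run(replica_fn, args=(x,))
  # A tuple with one value per replica on this worker.
  local_results = strategy.experimental_local_results(per_replica_result)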
What's under the hood of this method, when we say the tf.data.Dataset instance - dataset - gets distributed? It depends on how you set the tf.data.experimental.AutoShardPolicy through tf.data.experimental.DistributeOptions. By default, it is set to tf.data.experimental.AutoShardPolicy.AUTO. In a multi-worker setting, we will first attempt to distribute dataset by detecting whether dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) and, if so, try to shard the input files. Note that there has to be at least one input file per worker. If you have less than one input file per worker, we suggest that you disable dataset sharding across workers by setting tf.data.experimental.DistributeOptions.auto_shard_policy to tf.data.experimental.AutoShardPolicy.OFF.
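For example, a minimal sketch of turning automatic sharding off, assuming dataset is a tf.data.Dataset:
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)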
If the attempt to shard by file is unsuccessful (i.e. the dataset is not read from files), we will shard the dataset evenly at the end by appending a .shard operation to the end of the processing pipeline. This will cause the entire preprocessing pipeline for all the data to be run on every worker, and each worker will do redundant work. We will print a warning if this route is selected.
As mentioned before, within each worker, we will also split the data among all the worker devices (if more than one is present). This will happen even if multi-worker sharding is disabled.
If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.experimental_distribute_datasets_from_function instead, which does not do any automatic splitting or sharding.
You can also use the element_spec property of the tf.distribute.DistributedDataset instance returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function.
strategy = tf.distribute.MirroredStrategy()

# Create a dataset
dataset = tf.data.TFRecordDataset(
    ["/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])
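# A sketch of the element_spec usage described above; the step body is
# illustrative only and replica_fn is assumed to be defined as before.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(inputs):
  # process dataset elements
  return strategy.run(replica_fn, args=(inputs,))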