A state & compute distribution policy on a list of devices.
tf.distribute.Strategy(
    extended
)
See the guide for overview and examples.

In short:

- To use it with Keras `compile`/`fit`, please read the Keras guide.
- You may pass a descendant of `tf.distribute.Strategy` to `tf.estimator.RunConfig` to specify how a `tf.estimator.Estimator` should distribute its computation. See the guide.
- Otherwise, use `tf.distribute.Strategy.scope` to specify that a strategy should be used when building and executing your model. (This puts you in the "cross-replica context" for this strategy, which means the strategy is put in control of things like variable placement.) If you are writing a custom training loop, you will need to call a few more methods, see the guide:
  1. Start by either creating a `tf.data.Dataset` normally or using `tf.distribute.experimental_make_numpy_dataset` to make a dataset out of a `numpy` array.
  2. Use `tf.distribute.Strategy.experimental_distribute_dataset` to convert a `tf.data.Dataset` to something that produces "per-replica" values. If you want to manually specify how the dataset should be partitioned across replicas, use `tf.distribute.Strategy.experimental_distribute_datasets_from_function` instead.
  3. Use `tf.distribute.Strategy.run` to run a function once per replica, taking values that may be "per-replica" (e.g. from a distributed dataset) and returning "per-replica" values. This function is executed in "replica context", which means each operation is performed separately on each replica.
  4. Finally, use a method (such as `tf.distribute.Strategy.reduce`) to convert the resulting "per-replica" values into ordinary `Tensor`s.
A custom training loop can be as simple as:

with my_strategy.scope():
  @tf.function
  def distribute_train_epoch(dataset):
    def replica_fn(input):
      # process input and return result
      return result

    total_result = 0
    for x in dataset:
      per_replica_result = my_strategy.run(replica_fn, args=(x,))
      total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,
                                         per_replica_result, axis=None)
    return total_result

  dist_dataset = my_strategy.experimental_distribute_dataset(dataset)
  for _ in range(EPOCHS):
    train_result = distribute_train_epoch(dist_dataset)
This takes an ordinary `dataset` and `replica_fn` and runs it distributed using a particular `tf.distribute.Strategy` named `my_strategy` above. Any variables created in `replica_fn` are created using `my_strategy`'s policy, and library functions called by `replica_fn` can use the `get_replica_context()` API to implement distributed-specific behavior.

You can use the `reduce` API to aggregate results across replicas and use this as a return value from one iteration over the distributed dataset. Or you can use `tf.keras.metrics` (such as loss, accuracy, etc.) to accumulate metrics across steps in a given epoch, as in the sketch below.
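For instance, here is a minimal sketch of accumulating a loss metric across the steps of an epoch; it assumes the `my_strategy` and `dist_dataset` names from the example above, and the per-replica loss computation is only a placeholder:

# A minimal sketch (not the official tutorial code): accumulate a Keras
# metric across steps of an epoch. `my_strategy` and `dist_dataset` are the
# names used in the example above; the loss below is a placeholder.
with my_strategy.scope():
  # Create the metric under the strategy's scope so its variables are
  # handled by the strategy.
  train_loss = tf.keras.metrics.Mean(name='train_loss')

@tf.function
def distributed_step(x):
  def replica_fn(inputs):
    loss = tf.reduce_sum(inputs)   # placeholder for the real loss computation
    train_loss.update_state(loss)  # accumulates across steps and replicas
    return loss
  return my_strategy.run(replica_fn, args=(x,))

for x in dist_dataset:
  distributed_step(x)
print('epoch loss:', train_loss.result().numpy())
train_loss.reset_states()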
See the custom training loop tutorial for a more detailed example.
| Attributes | |
|---|---|
| `extended` | `tf.distribute.StrategyExtended` with additional methods. |
| `num_replicas_in_sync` | Returns number of replicas over which gradients are aggregated. |
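For example, a common use of `num_replicas_in_sync` (a minimal sketch; the per-replica batch size of 64 is a hypothetical value) is computing the global batch size from a per-replica batch size:

# A minimal sketch; the per-replica batch size is a hypothetical value.
strategy = tf.distribute.MirroredStrategy()
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync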
Methods
experimental_assign_to_logical_device
experimental_assign_to_logical_device(
    tensor, logical_device_id
)

Adds annotation that `tensor` will be assigned to a logical device.
# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.experimental.TPUStrategy(
    resolver, device_assignment=device_assignment)
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  output = tf.add(inputs, inputs)

  # Add operation will be executed on logical device 0.
  output = strategy.experimental_assign_to_logical_device(output, 0)
  return output

strategy.run(step_fn, args=(next(iterator),))
| Args | |
|---|---|
| `tensor` | Input tensor to annotate. |
| `logical_device_id` | Id of the logical core to which the tensor will be assigned. |

| Raises | |
|---|---|
| `ValueError` | The logical device id presented is not consistent with the total number of partitions specified by the device assignment. |

| Returns |
|---|
| Annotated tensor with identical value as `tensor`. |
experimental_distribute_dataset
experimental_distribute_dataset(
    dataset
)

Distributes a `tf.data.Dataset` instance provided via `dataset`.
The returned distributed dataset can be iterated over much like a regular dataset. NOTE: Currently, the user cannot add any more transformations to a distributed dataset.
The following is an example:
strategy = tf.distribute.MirroredStrategy()

# Create a dataset.
dataset = tf.data.TFRecordDataset([
    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Iterate over the distributed dataset.
for x in dist_dataset:
  # Process dataset elements.
  strategy.run(train_step, args=(x,))
We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). For example, with a global batch size of 64 and two replicas, each replica receives a batch of 32 elements per step.
In a multi-worker setting, we will first attempt to distribute the dataset by detecting whether the dataset is being created out of reader datasets (e.g. TFRecordDataset, TextLineDataset, etc.) and, if so, attempting to shard the input files. Note that there has to be at least one input file per worker. If you have fewer input files than workers, we suggest that you disable automatic dataset sharding across workers using the method below.
If that attempt is unsuccessful (e.g. the dataset is created from a `Dataset.range` call), we will shard the dataset evenly at the end by appending a `.shard` operation to the end of the processing pipeline. This will cause the entire preprocessing pipeline for all the data to be run on every worker, and each worker will do redundant work. We will print a warning if this method of sharding is selected.
You can disable dataset sharding across workers using the `auto_shard_policy` option in `tf.data.experimental.DistributeOptions`.
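For example, a minimal sketch (assuming `strategy` and an existing `dataset`) of turning automatic sharding off before distributing:

# A minimal sketch; `dataset` and `strategy` are assumed to already exist.
options = tf.data.Options()
# The DistributeOptions instance is exposed as `options.experimental_distribute`.
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)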
Within each worker, we will also split the data among all the worker devices (if more than one is present), and this will happen even if multi-worker sharding is disabled using the method above.
If the above batch splitting and dataset sharding logic is undesirable, please use `experimental_distribute_datasets_from_function` instead, which does not do any automatic splitting or sharding.

You can also use the `element_spec` property of the distributed dataset returned by this API to query the `tf.TypeSpec` of the elements returned by the iterator. This can be used to set the `input_signature` property of a `tf.function`.
strategy = tf.distribute.MirroredStrategy()

# Create a dataset.
dataset = tf.data.TFRecordDataset([
    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(inputs):
  # Train the model with inputs.
  return

# Iterate over the distributed dataset.
for x in dist_dataset:
  # Process dataset elements.
  strategy.run(train_step, args=(x,))
| Args | |
|---|---|
| `dataset` | `tf.data.Dataset` that will be sharded across all replicas using the rules stated above. |

| Returns |
|---|
| A "distributed `Dataset`", which acts like a `tf.data.Dataset` except it produces "per-replica" values. |
experimental_distribute_datasets_from_function
experimental_distribute_datasets_from_function(
    dataset_fn
)

Distributes `tf.data.Dataset` instances created by calls to `dataset_fn`.
`dataset_fn` will be called once for each worker in the strategy. Each replica on that worker will dequeue one batch of inputs from the local `Dataset` (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step).

This method can be used for several purposes. For example, where `experimental_distribute_dataset` is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in `experimental_distribute_dataset`). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.

`experimental_distribute_dataset` may also sometimes fail to split the batch across replicas on a worker. In that case, this method can be used where that limitation does not exist.

The `dataset_fn` should take a `tf.distribute.InputContext` instance where information about batching and input replication can be accessed:
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
  return d.shard(
      input_context.num_input_pipelines, input_context.input_pipeline_id)

inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn)

for batch in inputs:
  replica_results = strategy.run(replica_fn, args=(batch,))
To query the `tf.TypeSpec` of the elements in the distributed dataset returned by this API, you need to use the `element_spec` property of the distributed iterator. This `tf.TypeSpec` can be used to set the `input_signature` property of a `tf.function`.
# If you want to specify `input_signature` for a `tf.function` you must
# first create the iterator.
iterator = iter(inputs)

@tf.function(input_signature=[iterator.element_spec])
def replica_fn_with_signature(inputs):
  # Train the model with inputs.
  return

for _ in range(steps):
  strategy.run(replica_fn_with_signature,
               args=(next(iterator),))
| Args | |
|---|---|
| `dataset_fn` | A function taking a `tf.distribute.InputContext` instance and returning a `tf.data.Dataset`. |

| Returns |
|---|
| A "distributed `Dataset`", which acts like a `tf.data.Dataset` except it produces "per-replica" values. |
experimental_distribute_values_from_function
experimental_distribute_values_from_function(
    value_fn
)

Generates `tf.distribute.DistributedValues` from `value_fn`.
This function generates `tf.distribute.DistributedValues` to pass into `run`, `reduce`, or other methods that take distributed values when not using datasets.
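For instance, a minimal sketch (assuming an existing `strategy`; the `value_fn` and `replica_fn` below are illustrative only) of generating per-replica values and passing them to `run` and `reduce`:

# A minimal sketch; `strategy` is assumed to exist and the functions below
# are illustrative only.
def value_fn(ctx):
  # Give each replica its own id as a float tensor.
  return tf.cast(ctx.replica_id_in_sync_group, tf.float32)

distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))

def replica_fn(value):
  return value * 2.0

per_replica_results = strategy.run(replica_fn, args=(distributed_values,))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_results,
                        axis=None)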
| Args | |
|---|---|
| `value_fn` | The function to run to generate values. It is called for each replica with `tf.distribute.ValueContext` as the sole argument. It must return a `Tensor` or a type that can be converted to a `Tensor`. |

| Returns |
|---|
| A `tf.distribute.DistributedValues` containing a value for each replica. |
Example usage:
- Return constant value per replica:
strategy = tf.distribute.MirroredStrategy()
def value_fn(ctx):
  return tf.constant(1.)