TPU distribution strategy implementation.
__init__( tpu_cluster_resolver=None, steps_per_run=None, device_assignment=None )
Initializes the TPUStrategy object.
tpu_cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster.
steps_per_run: Number of steps to run on device before returning to the host. Note that this can have side-effects on performance, hooks, metrics, summaries etc. This parameter is only used when Distribution Strategy is used with estimator or keras.
device_assignment: Optional tf.contrib.tpu.DeviceAssignment to specify the placement of replicas on the TPU cluster. Currently only supports the use case of using a single core within a TPU cluster.
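For context, a minimal construction sketch (not from this page; it assumes a reachable TPU and the TF 2.0-era setup calls, which this page does not document):

```python
import tensorflow as tf

# Resolve and initialize the TPU system. tpu="" assumes the TPU name comes
# from the environment; pass an explicit name or grpc:// address otherwise.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Create the strategy; steps_per_run only matters for Estimator/Keras use.
strategy = tf.distribute.experimental.TPUStrategy(resolver)
print("Number of replicas:", strategy.num_replicas_in_sync)
```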
extended: tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync: Returns the number of replicas over which gradients are aggregated.
experimental_distribute_dataset( dataset )
Distributes a tf.data.Dataset instance provided via dataset.
In a multi-worker setting, we will first attempt to distribute the dataset by detecting whether it is being created out of reader datasets (e.g. TFRecordDataset, TextLineDataset, etc.) and, if so, sharding the input files. Note that there has to be at least one input file per worker. If you have fewer than one input file per worker, we suggest that you disable distributing your dataset using the method below.
If that attempt is unsuccessful (e.g. the dataset is created from a
Dataset.range), we will shard the dataset evenly at the end by appending a
.shard operation to the end of the processing pipeline. This will cause
the entire preprocessing pipeline for all the data to be run on every
worker, and each worker will do redundant work. We will print a warning
if this method of sharding is selected.
You can disable dataset distribution using the auto_shard option in tf.data.experimental.DistributeOptions, for example:
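A hedged sketch of turning off automatic file-based sharding, assuming the TF 2.0-era boolean auto_shard attribute on tf.data.Options().experimental_distribute (newer releases expose auto_shard_policy instead):

```python
options = tf.data.Options()
# Assumption: TF 2.0-era flag; disables multi-worker auto-sharding.
options.experimental_distribute.auto_shard = False
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```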
Within each host, we will also split the data among all the worker devices (if more than one is present), and this will happen even if multi-worker sharding is disabled using the method above.
The following is an example:
```python
strategy = tf.distribute.MirroredStrategy()

# Create a dataset from TFRecord files.
dataset = tf.data.TFRecordDataset(
    ["/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Iterate over the distributed dataset.
for x in dist_dataset:
  # Process dataset elements.
  strategy.experimental_run_v2(train_step, args=(x,))
```
dataset: tf.data.Dataset that will be sharded across all replicas using the rules stated above.
A DistributedDataset which returns inputs for each step of the computation.
experimental_local_results( value )
Returns the list of all local per-replica values contained in value.
value: A value returned by experimental_run_v2(), extended.call_for_each_replica(), or a variable created in scope.
A tuple of values contained in value. If value represents a single value, this returns (value,).
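For illustration, a hedged sketch of unpacking per-replica results; train_step and x are placeholders not defined on this page:

```python
# Run one step on every replica and collect the per-replica outputs locally.
per_replica_losses = strategy.experimental_run_v2(train_step, args=(x,))
local_losses = strategy.experimental_local_results(per_replica_losses)
for i, loss in enumerate(local_losses):
  print("replica", i, "loss:", loss)
```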
experimental_make_numpy_dataset( numpy_input )
Makes a dataset for input provided via a numpy array.
This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input.
numpy_input: A nest of NumPy input arrays that will be distributed evenly across all replicas. Note that lists of NumPy arrays are stacked, as that is normal tf.data.Dataset behavior.
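A minimal sketch, assuming the strategy object from above and illustrative in-memory arrays:

```python
import numpy as np

features = np.random.rand(64, 10).astype(np.float32)
labels = np.random.randint(0, 2, size=(64, 1)).astype(np.int32)

# Build a tf.data.Dataset from NumPy arrays without embedding them as a
# large graph constant, then batch and distribute it as usual.
dataset = strategy.experimental_make_numpy_dataset((features, labels))
dataset = dataset.batch(8)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```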
experimental_run_v2( fn, args=(), kwargs=None )
See base class.
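A hedged usage sketch; the step function below is a placeholder (a real step would run a model), and with TPUStrategy the call is typically wrapped in a tf.function:

```python
@tf.function
def distributed_step(inputs):
  def train_step(batch):
    features, labels = batch
    # Placeholder per-example value standing in for a per-example loss.
    return tf.reduce_sum(features, axis=-1)
  # Runs train_step on every replica with that replica's slice of inputs.
  return strategy.experimental_run_v2(train_step, args=(inputs,))

for x in dist_dataset:
  per_replica_values = distributed_step(x)
```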
reduce( reduce_op, value, axis )
Reduce value across replicas.
Given a per-replica value returned by
experimental_run_v2, say a
per-example loss, the batch will be divided across all the replicas. This
function allows you to aggregate across replicas and optionally also across
batch elements. For example, if you have a global batch size of 8 and 2
replicas, values for examples
[0, 1, 2, 3] will be on replica 0 and
[4, 5, 6, 7] will be on replica 1. By default,
reduce will just
aggregate across replicas, returning
[0+4, 1+5, 2+6, 3+7]. This is useful
when each replica is computing a scalar or some other value that doesn't
have a "batch" dimension (like a gradient). More often you will want to
aggregate across the global batch, which you can get by specifying the batch
dimension as the axis, typically axis=0. In this case it would return a
scalar 0+1+2+...+7.
If there is a last partial batch, you will need to specify an axis so
that the resulting shape is consistent across replicas. So if the last
batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you
would get a shape mismatch unless you specify
axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct
denominator of 6. Contrast this with computing reduce_mean to get a
scalar value on each replica and this function to average those means,
which will weigh some values 1/8 and others 1/4.
reduce_op: A tf.distribute.ReduceOp value specifying how values should be combined.
value: A "per replica" value, e.g. returned by
experimental_run_v2to be combined into a single tensor.
axis: Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
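Continuing the earlier sketches (per_replica_losses comes from experimental_run_v2), a hedged example of both reduction modes:

```python
per_replica_losses = strategy.experimental_run_v2(train_step, args=(x,))
# Sum across replicas only (no batch axis), e.g. for gradients or counters.
summed = strategy.reduce(
    tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
# Average across replicas and across the batch dimension of each replica's
# tensor, so a partial final batch uses the correct denominator.
mean_loss = strategy.reduce(
    tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=0)
```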
scope()
Returns a context manager selecting this Strategy as current.
Inside a with strategy.scope(): code block, this thread will use a variable creator set by strategy, and will enter its "cross-replica context".
A context manager.
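A brief sketch of the typical pattern, creating model variables under the strategy's scope (the Keras model here is illustrative, not prescribed by this page):

```python
with strategy.scope():
  # Variables created inside the scope go through the strategy's variable
  # creator and are placed according to the strategy.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
```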