tf.distribute.DistributedValues
Base class for representing distributed values.
tf.distribute.DistributedValues(
values
)
A subclass instance of tf.distribute.DistributedValues is created when you
create variables within a distribution strategy, iterate over a
tf.distribute.DistributedDataset, or call tf.distribute.Strategy.run.
This base class should never be instantiated directly.
tf.distribute.DistributedValues contains one value per replica. Depending on
the subclass, the values are either synced on update, synced on demand,
or never synced.
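For example, a variable created inside a tf.distribute.MirroredStrategy scope
is a MirroredVariable, one such subclass whose per-replica values are synced
on update. A minimal sketch, assuming two virtual GPUs as in the examples
below:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  v = tf.Variable(1.0)  # a MirroredVariable: one value per replica, synced on update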
A tf.distribute.DistributedValues can be reduced to obtain a single value
across replicas, passed as input into tf.distribute.Strategy.run, or its
per-replica values can be inspected using
tf.distribute.Strategy.experimental_local_results.
Example usage:
- Created from a tf.distribute.DistributedDataset:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
- Returned by run:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
  ctx = tf.distribute.get_replica_context()
  return ctx.replica_id_in_sync_group
distributed_values = strategy.run(run)
- As input into run:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
@tf.function
def run(input):
  return input + 1.0
updated_value = strategy.run(run, args=(distributed_values,))
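The result updated_value is itself a tf.distribute.DistributedValues. A quick
way to confirm this (not part of the original docstring), reusing the strategy
above:
print(strategy.experimental_local_results(updated_value))
# per-replica tensors [6.] and [7.] on the two-GPU setup above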
- Reduce value:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
reduced_value = strategy.reduce(tf.distribute.ReduceOp.SUM,
                                distributed_values,
                                axis=0)
# 11.0: sums across the two replicas and along axis 0 (5. + 6.)
- Inspect local replica values:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
per_replica_values = strategy.experimental_local_results(
    distributed_values)
per_replica_values
(<tf.Tensor: shape=(1,), dtype=float32, numpy=array([5.], dtype=float32)>,
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([6.], dtype=float32)>)
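These pieces compose: per-replica results returned by run can themselves be
reduced. A minimal sketch (not part of the original docstring) that doubles
each replica's shard and then sums across replicas; note that axis=None,
unlike axis=0 in the reduce example above, reduces only across replicas and
leaves each per-replica tensor's shape intact:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.from_tensor_slices([5., 6., 7., 8.]).batch(2)
dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
distributed_values = next(dataset_iterator)
@tf.function
def double(x):
  return x * 2.0  # runs on each replica with that replica's shard
doubled = strategy.run(double, args=(distributed_values,))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, doubled, axis=None)
# total == [22.]: (5. * 2) + (6. * 2), summed across the two replicas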