tf.distribute.ReductionToOneDevice
Always do reduction to one device first and then do broadcasting.
Inherits From: CrossDeviceOps
tf.distribute.ReductionToOneDevice(
reduce_to_device=None, accumulation_fn=None
)
Batch reduction is done by reducing each element one at a time.
Args:
  reduce_to_device: the intermediate device to reduce to. If None, reduce
    to the first device in the destinations of the reduce() method.
  accumulation_fn: a function that does accumulation. If None,
    tf.math.add_n is used.
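As a sketch of typical usage (assuming TensorFlow 2.x; the single /cpu:0 device is an illustrative choice so the example runs on any machine, and passing accumulation_fn=tf.math.add_n merely restates the default):

```python
import tensorflow as tf

# Route all cross-replica reductions through one device, then broadcast.
cross_ops = tf.distribute.ReductionToOneDevice(
    reduce_to_device="/cpu:0", accumulation_fn=tf.math.add_n)
strategy = tf.distribute.MirroredStrategy(
    devices=["/cpu:0"], cross_device_ops=cross_ops)

# Run a trivial step on each replica, then reduce across replicas.
per_replica = strategy.run(lambda: tf.constant(2.0))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)
print(float(total))  # 2.0 with a single replica
```

In practice the class is almost always passed as cross_device_ops to a strategy like this, rather than called directly.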
Methods
batch_reduce
View source
batch_reduce(
reduce_op, value_destination_pairs
)
Reduce PerReplica objects in a batch.
Reduces each first element of value_destination_pairs to the
corresponding second element, which indicates the destinations.
Args:
  reduce_op: Indicates how per_replica_value will be reduced. Accepted
    values are tf.distribute.ReduceOp.SUM and tf.distribute.ReduceOp.MEAN.
  value_destination_pairs: a list or a tuple of tuples of PerReplica
    objects (or tensors with device set if there is one device) and
    destinations.

Returns:
  a list of Mirrored objects.

Raises:
  ValueError: if value_destination_pairs is not a list or a tuple of
    tuples of PerReplica objects and destinations.
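A minimal direct invocation can be sketched as follows (assuming eager TensorFlow 2.x on CPU; plain tensors stand in for PerReplica objects per the single-device allowance above, and inspecting the returned Mirrored objects is left out because their internals are not a stable public API):

```python
import tensorflow as tf

ops = tf.distribute.ReductionToOneDevice()

# Each pair is (value to reduce, destination device). With one device,
# plain eager tensors may stand in for PerReplica objects.
pairs = [(tf.constant(1.0), "/cpu:0"),
         (tf.constant(3.0), "/cpu:0")]
results = ops.batch_reduce(tf.distribute.ReduceOp.SUM, pairs)
print(len(results))  # one Mirrored object per input pair
```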
broadcast
View source
broadcast(
tensor, destinations
)
Broadcast the tensor to destinations.
Args:
  tensor: the tensor to broadcast.
  destinations: the broadcast destinations.

Returns:
  a Mirrored object.
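Broadcasting is normally performed on your behalf, for example when a variable's initial value is mirrored to every replica device. A sketch (single /cpu:0 device so it runs anywhere; whether variable initialization routes through this exact method is an implementation detail):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(
    devices=["/cpu:0"],
    cross_device_ops=tf.distribute.ReductionToOneDevice())

with strategy.scope():
    # The initial value is mirrored to each replica device, giving a
    # variable with one identical copy per device.
    v = tf.Variable(1.0)
print(float(v.numpy()))  # 1.0 on every replica
```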
reduce
View source
reduce(
reduce_op, per_replica_value, destinations
)
Reduce per_replica_value to destinations.

It runs the reduction operation defined by reduce_op and puts the
result on destinations.

Args:
  reduce_op: Indicates how per_replica_value will be reduced. Accepted
    values are tf.distribute.ReduceOp.SUM and tf.distribute.ReduceOp.MEAN.
  per_replica_value: a PerReplica object or a tensor with device set.
  destinations: the reduction destinations.

Returns:
  a Mirrored object.
Raises:
  ValueError: if per_replica_value can't be converted to a PerReplica
    object.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.