Module: tf.compat.v1.distribute

Library for running a computation across multiple devices.

Modules

cluster_resolver module: Library imports for cluster resolvers.

experimental module: Experimental Distribution Strategy library.

Classes

class CrossDeviceOps: Base class for cross-device reduction and broadcasting algorithms.

class HierarchicalCopyAllReduce: Reduction using hierarchical copy all-reduce.

class InputContext: A class wrapping information needed by an input function.
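
A minimal sketch of how an InputContext is typically consumed: the strategy passes one to a user-supplied dataset function, which uses it to shard the data and derive a per-replica batch size. The global batch size and the experimental_distribute_datasets_from_function method name are assumptions from the TF 2.0-era API.

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    GLOBAL_BATCH_SIZE = 64  # assumed value, for illustration only

    def dataset_fn(input_context):
        # Derive this replica's batch size from the global batch size.
        batch_size = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
        dataset = tf.data.Dataset.from_tensor_slices(tf.range(1024))
        # Shard the data across input pipelines so each reads a distinct slice.
        dataset = dataset.shard(input_context.num_input_pipelines,
                                input_context.input_pipeline_id)
        return dataset.batch(batch_size)

    dist_dataset = strategy.experimental_distribute_datasets_from_function(dataset_fn)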

class InputReplicationMode: Replication mode for the input function.

class MirroredStrategy: Mirrors variables to distribute computation across multiple devices and machines.
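
A minimal usage sketch, assuming a TF 2.x runtime and the equivalent non-compat alias tf.distribute.MirroredStrategy: variables created inside the strategy's scope get one synchronized copy per device.

    import tensorflow as tf

    # With no arguments, MirroredStrategy uses all GPUs visible to the process.
    strategy = tf.distribute.MirroredStrategy()

    with strategy.scope():
        # Variables created here are mirrored: one copy per device, kept in sync.
        v = tf.Variable(1.0)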

class NcclAllReduce: Reduction using NCCL all-reduce.
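
The cross-device ops classes above are used by passing an instance as the cross_device_ops argument of MirroredStrategy. A hedged sketch; NCCL is generally the faster choice on NVIDIA GPUs, while hierarchical copy can win on some single-machine topologies.

    import tensorflow as tf

    # Select the reduction algorithm explicitly instead of the default.
    strategy = tf.distribute.MirroredStrategy(
        cross_device_ops=tf.distribute.NcclAllReduce())
    # Alternative: cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()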

class OneDeviceStrategy: A distribution strategy for running on a single device.
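
A minimal sketch; the device string "/cpu:0" is an assumption for illustration.

    import tensorflow as tf

    strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
    with strategy.scope():
        v = tf.Variable(0.0)  # placed on the single configured device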

class ReduceOp: Indicates how a set of values should be reduced.

class ReductionToOneDevice: Reduces values to one device first, then broadcasts the result.

class ReplicaContext: The tf.distribute.Strategy API available when in a replica context.

class Server: An in-process TensorFlow server, for use in distributed training.
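
A minimal in-process server sketch; the cluster layout (one two-task worker job on localhost ports) is an assumption for illustration.

    import tensorflow as tf

    cluster = tf.train.ClusterSpec(
        {"worker": ["localhost:2222", "localhost:2223"]})
    # Start this process as task 0 of the "worker" job.
    server = tf.distribute.Server(cluster, job_name="worker", task_index=0)
    server.join()  # block and serve until the process is killed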

class Strategy: A state and compute distribution policy for a list of devices.

class StrategyExtended: Additional APIs for algorithms that need to be distribution-aware.
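
StrategyExtended is not constructed directly; it is reached through a strategy's extended property, as in this brief sketch.

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    extended = strategy.extended  # a StrategyExtended instance
    print(extended.worker_devices)  # the devices this strategy computes on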

Functions

experimental_set_strategy(...): Sets a tf.distribute.Strategy as current without entering a with strategy.scope() block.
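
A hedged sketch of the intended usage: the strategy becomes current for all code that follows, and passing None restores the default.

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    # Equivalent to entering strategy.scope(), but without indenting the
    # code that follows under a `with` block.
    tf.compat.v1.distribute.experimental_set_strategy(strategy)
    v = tf.Variable(1.0)  # created under the strategy, hence mirrored
    # Pass None to restore the default (no) strategy.
    tf.compat.v1.distribute.experimental_set_strategy(None)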

get_loss_reduction(...): Returns the tf.distribute.ReduceOp corresponding to the last loss reduction.

get_replica_context(...): Returns the current tf.distribute.ReplicaContext or None.

get_strategy(...): Returns the current tf.distribute.Strategy object.

has_strategy(...): Returns whether there is a current non-default tf.distribute.Strategy.

in_cross_replica_context(...): Returns True if in a cross-replica context.
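
The context-query functions above fit together as in this minimal sketch (using the equivalent tf.distribute aliases): inside strategy.scope() the code runs in a cross-replica context, so there is a current strategy but no active replica context.

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()

    with strategy.scope():
        assert tf.distribute.has_strategy()
        assert tf.distribute.in_cross_replica_context()
        assert tf.distribute.get_strategy() is strategy
        # No replica is active yet, so there is no ReplicaContext.
        assert tf.distribute.get_replica_context() is None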