Abstract class for all implementations of ClusterResolvers.
This defines the skeleton for all implementations of ClusterResolvers. ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS) and give TensorFlow the necessary information to set up distributed training.
By letting TensorFlow communicate with these systems, we will be able to automatically discover and resolve IP addresses for various TensorFlow workers. This will eventually allow us to automatically recover from underlying machine failures and scale TensorFlow worker clusters up and down.
Note to implementors of a ClusterResolver subclass: in addition to these abstract methods, when the task_type, task_id, and rpc_layer attributes are applicable, you should also implement them either as properties with getters or setters, or directly set the attributes self._task_type, self._task_id, or self._rpc_layer so the base class' getters and setters are used. See tf.distribute.cluster_resolver.SimpleClusterResolver.__init__ for an example.
In general, multi-client tf.distribute strategies such as
tf.distribute.experimental.MultiWorkerMirroredStrategy require task_type and
task_id properties to be available in the
ClusterResolver they are using. On
the other hand, these concepts are not applicable in single-client strategies, such as
tf.distribute.experimental.TPUStrategy, because the program is only
expected to be run on one task, so there should not be a need to have code
branches according to task type and task id.
- task_type is the name of the server's current named job (e.g. 'worker' or 'ps' in a parameter server training job).
- task_id is the ordinal index of the server within the task type.
- rpc_layer is the protocol used by TensorFlow to communicate with other TensorFlow servers in a distributed environment.
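To make these concrete, here is a minimal sketch using tf.distribute.cluster_resolver.SimpleClusterResolver; the addresses, task assignment, and RPC layer are made up for illustration:

```python
import tensorflow as tf

# Hypothetical cluster with two parameter servers and three workers.
cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222", "localhost:2223"],
    "worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})

# SimpleClusterResolver takes the task type, task id, and RPC layer
# explicitly; other resolvers discover them from the environment.
resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    cluster_spec, task_type="worker", task_id=1, rpc_layer="grpc")

print(resolver.task_type)  # 'worker'
print(resolver.task_id)    # 1
print(resolver.rpc_layer)  # 'grpc'
```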
environment
Returns the current environment in which TensorFlow is running.
There are two possible return values: "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere).
If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect.
Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property.
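A minimal sketch of the open-source case (the subclass name is hypothetical, and the abstract methods are omitted):

```python
import tensorflow as tf

class MyOpenSourceClusterResolver(tf.distribute.cluster_resolver.ClusterResolver):
    """Hypothetical resolver intended only for open-source deployments."""

    @property
    def environment(self):
        # Not running in a Google-internal environment, so return the
        # empty string rather than 'google'.
        return ''
```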
task_id
Returns the task id this ClusterResolver indicates.
In a TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when the user needs to run specific code according to the task index. For example:
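A minimal sketch, with tf.distribute.cluster_resolver.SimpleClusterResolver and a made-up cluster standing in for a real deployment:

```python
import tensorflow as tf

cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222", "localhost:2223"],
    "worker": ["localhost:2224", "localhost:2225", "localhost:2226"]
})

# SimpleClusterResolver is used here for illustration; other resolvers
# obtain the task id from the underlying cluster environment.
resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    cluster_spec, task_type="worker", task_id=0)

if resolver.task_type == 'worker' and resolver.task_id == 0:
    # Runs only on the instance designated as worker 0, e.g. for work
    # that must happen exactly once per cluster.
    pass
else:
    # Runs on every other task.
    pass
```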
For more information, please see the tf.distribute.cluster_resolver.ClusterResolver class docstring.
task_type
Returns the task type this ClusterResolver indicates.
In a TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include:
- 'chief': a worker that is designated with more responsibility,
- 'worker': a regular worker for training/evaluation,
- 'ps': a parameter server, or
- 'evaluator': an evaluator that evaluates the checkpoints for metrics.
See the Multi-worker configuration guide for more information about the 'chief' and 'worker' task types, which are the most commonly used.
Having access to such information is useful when the user needs to run specific code according to task type. For example:
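A sketch branching on task type (the cluster layout is hypothetical):

```python
import tensorflow as tf

cluster_spec = tf.train.ClusterSpec({
    "chief": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"]
})

resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    cluster_spec, task_type="chief", task_id=0)

if resolver.task_type == 'chief':
    # Chief-only duties, e.g. saving checkpoints.
    pass
elif resolver.task_type == 'worker':
    # Regular training/evaluation work.
    pass
```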
For more information, please see the tf.distribute.cluster_resolver.ClusterResolver class docstring.
cluster_spec()
Retrieve the current state of the cluster and return a tf.train.ClusterSpec.
Implementors of this function must take care to ensure that the ClusterSpec returned is up-to-date at the time this function is called. This usually means retrieving the information from the underlying cluster management system every time this function is invoked and reconstructing a cluster_spec, rather than attempting to cache anything.
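For illustration, a hypothetical subclass might look like the following sketch, where query_cluster_management_system() is a made-up stand-in for a live lookup against GCE, Kubernetes, or a similar system:

```python
import tensorflow as tf

def query_cluster_management_system():
    # Hypothetical stand-in for a fresh lookup against the cluster
    # management system; returns the current worker addresses.
    return ["10.0.0.1:2222", "10.0.0.2:2222"]

class MyClusterResolver(tf.distribute.cluster_resolver.ClusterResolver):
    """Hypothetical resolver that rebuilds its ClusterSpec on each call."""

    def cluster_spec(self):
        # No caching: re-query on every invocation so the returned spec
        # reflects machines that were added, removed, or replaced.
        return tf.train.ClusterSpec(
            {"worker": query_cluster_management_system()})

    def master(self, task_type=None, task_id=None, rpc_layer=None):
        # Hypothetical choice: address the first worker as the master.
        return "grpc://" + query_cluster_management_system()[0]
```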
master(task_type=None, task_id=None, rpc_layer=None)
Retrieves the name or URL of the session master.
Args:
- task_type: (Optional) The type of the TensorFlow task of the master.
- task_id: (Optional) The index of the TensorFlow task of the master.
- rpc_layer: (Optional) The RPC protocol for the given cluster.

Returns:
The name or URL of the session master.
Implementors of this function must take care to ensure that the master returned is up-to-date at the time of calling this function. This usually means retrieving the master every time this function is invoked.
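A usage sketch, assuming tf.distribute.cluster_resolver.TFConfigClusterResolver and a TF_CONFIG environment variable populated just for the example:

```python
import json
import os

import tensorflow as tf

# TFConfigClusterResolver reads the standard TF_CONFIG environment
# variable; the cluster below is made up for the example.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:2222", "localhost:2223"]},
    "task": {"type": "worker", "index": 0}
})

resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
print(resolver.master())  # e.g. 'grpc://localhost:2222'
```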
num_accelerators(task_type=None, task_id=None, config_proto=None)
Returns the number of accelerator cores per worker.
This returns the number of accelerator cores (such as GPUs and TPUs) available per worker.
Optionally, we allow callers to specify the task_type and task_id if they want to target a specific TensorFlow task to query the number of accelerators. This is to support heterogeneous environments, where the number of accelerator cores per host differs.
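A usage sketch (the returned mapping depends on the hardware actually attached to the worker; the values shown are illustrative):

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
# Maps accelerator type to core count for this worker, e.g. {'GPU': 2}
# on a two-GPU machine.
print(resolver.num_accelerators())
```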