Initializes a distributed TPU system for use with TensorFlow.
tf.compat.v1.tpu.initialize_system(
    embedding_config: Optional[embedding_pb2.TPUEmbeddingConfiguration] = None,
    job: Optional[Text] = None,
    compilation_failure_closes_chips: bool = True,
    tpu_cancellation_closes_chips: Optional[bool] = None
) -> core_types.Tensor
Args

embedding_config: If not None, a TPUEmbeddingConfiguration proto describing
the desired configuration of the hardware embedding lookup tables. If
embedding_config is None, no hardware embeddings can be used.

job: The job (the XXX in the TensorFlow device specification /job:XXX) that
contains the TPU devices to be initialized. If job=None, it is assumed
there is only one job in the TensorFlow flock, and an error is returned if
this assumption does not hold.

compilation_failure_closes_chips: Whether to close TPU chips when a
compilation failure occurs.

tpu_cancellation_closes_chips: Whether to close TPU chips when a TPU
execution is cancelled. If the value is None, the behavior is determined by
the command-line flag tpu_cancellation_closes_chips for the TPU worker.
WARNING: this argument applies only to the TFRT TPU runtime.
Returns

A serialized TopologyProto that describes the TPU system. Note: the
topology must be evaluated using Session.run before it can be used.
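A minimal usage sketch follows. The gRPC worker address is a placeholder; actually running this requires a reachable TPU worker, and the `Topology` attributes printed at the end are only illustrative of how the evaluated result is typically consumed.

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Build the initialization op in the graph. It returns a string tensor
# holding a serialized TopologyProto.
topology_op = tf.compat.v1.tpu.initialize_system()

# The topology must be evaluated with Session.run before it can be used.
# "your-tpu-worker:8470" is a placeholder address, not a real endpoint.
with tf.Session("grpc://your-tpu-worker:8470") as sess:
    serialized_topology = sess.run(topology_op)
    # Parse the serialized proto into a usable Topology object.
    topology = tf.tpu.experimental.Topology(serialized=serialized_topology)
    print(topology.num_tasks, topology.num_tpus_per_task)

# When finished, the system is typically released with:
# sess.run(tf.compat.v1.tpu.shutdown_system())
```

Because initialization mutates global TPU state, it is normally run once per session before any TPU computations are launched.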