orbit.actions.SaveCheckpointIfPreempted
Action that saves on-demand checkpoints after a preemption.
orbit.actions.SaveCheckpointIfPreempted(
cluster_resolver: tf.distribute.cluster_resolver.ClusterResolver,
checkpoint_manager: tf.train.CheckpointManager,
checkpoint_number: Optional[tf.Variable] = None,
keep_running_after_save: Optional[bool] = False
)
Args

cluster_resolver
    A tf.distribute.cluster_resolver.ClusterResolver object.

checkpoint_manager
    A tf.train.CheckpointManager object.

checkpoint_number
    An optional tf.Variable holding the checkpoint number for the
    checkpoint manager; usually the global step.

keep_running_after_save
    Whether to keep the job running after saving the preemption
    on-demand checkpoint. Set this to True only when in-process
    preemption recovery with tf.distribute.experimental.PreemptionWatcher
    is enabled.
Methods
__call__
__call__(_) -> None
Call self as a function.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2025-04-18 UTC.