Manages multiple checkpoints by keeping some and deleting unneeded ones.
```python
tf.train.CheckpointManager(
    checkpoint, directory, max_to_keep, keep_checkpoint_every_n_hours=None,
    checkpoint_name='ckpt', step_counter=None, checkpoint_interval=None,
    init_fn=None
)
```
Example usage:

```python
import tensorflow as tf

checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
manager = tf.train.CheckpointManager(
    checkpoint, directory="/tmp/model", max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
while True:
    # train
    manager.save()
```
CheckpointManager preserves its own state across instantiations (see the
__init__ documentation for details). Only one should be active in a
particular directory at a time.
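The restore-or-initialize pattern this implies can be sketched as below. It is a minimal, self-contained variant of the example above (a bare `tf.Variable` stands in for a real model, and a temporary directory replaces `/tmp/model`): on the first run `latest_checkpoint` is `None` and `restore` is a no-op, while a later manager created on the same directory picks up the saved state.

```python
import tempfile

import tensorflow as tf

directory = tempfile.mkdtemp()

# First "run": nothing to restore yet, so training starts fresh.
step = tf.Variable(0, dtype=tf.int64)
checkpoint = tf.train.Checkpoint(step=step)
manager = tf.train.CheckpointManager(checkpoint, directory, max_to_keep=5)

checkpoint.restore(manager.latest_checkpoint)  # no-op when latest is None
if manager.latest_checkpoint:
    print("Restored from", manager.latest_checkpoint)
else:
    print("Initializing from scratch")

step.assign_add(10)  # stand-in for a training loop
manager.save()

# Second "run": a new manager on the same directory finds the saved state
# (its persisted state lives in the "checkpoint" file described below).
step2 = tf.Variable(0, dtype=tf.int64)
checkpoint2 = tf.train.Checkpoint(step=step2)
manager2 = tf.train.CheckpointManager(checkpoint2, directory, max_to_keep=5)
checkpoint2.restore(manager2.latest_checkpoint)
print(int(step2.numpy()))
```

Note that the two managers here are active one after the other, not concurrently, which is what the restriction above requires.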
directory: The path to a directory in which to write checkpoints. A special file named "checkpoint" is also written to this directory (in a human-readable text format) which contains the state of the CheckpointManager.
max_to_keep: An integer, the number of checkpoints to keep. Unless preserved by keep_checkpoint_every_n_hours, checkpoints are deleted from the active set, oldest first, until only max_to_keep remain.
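A small sketch of the max_to_keep behavior (using a throwaway temporary directory and a lone `tf.Variable` as the tracked state): after more saves than max_to_keep, only the newest max_to_keep checkpoints are listed by the manager.

```python
import tempfile

import tensorflow as tf

step = tf.Variable(0, dtype=tf.int64)
checkpoint = tf.train.Checkpoint(step=step)
directory = tempfile.mkdtemp()
manager = tf.train.CheckpointManager(checkpoint, directory, max_to_keep=3)

# Save five times; with max_to_keep=3 the two oldest are deleted.
for _ in range(5):
    step.assign_add(1)
    manager.save()

print(len(manager.checkpoints))  # 3 paths remain in the active set
print(manager.latest_checkpoint)
```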