Enum defining options for variable handling when saving.
NONE: No policy applied. Distributed variables are saved as one variable, with no device attached.
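Since NONE is the default policy, passing it explicitly is equivalent to omitting the option altogether. A minimal sketch (the checkpoint object and export directory here are illustrative):

import tensorflow as tf

exported = tf.train.Checkpoint()
exported.v = tf.Variable(1.0)

# Equivalent to calling tf.saved_model.save without any options.
tf.saved_model.save(
    exported, '/tmp/saved_none',
    options=tf.saved_model.SaveOptions(
        experimental_variable_policy=
            tf.saved_model.experimental.VariablePolicy.NONE))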
SAVE_VARIABLE_DEVICES: When saving variables, also save their device assignment. This is useful if one wants to hardcode devices in saved models, but it also makes them non-portable if soft device placement is disabled (see tf.config.set_soft_device_placement for details). This option is currently not fully supported by tf.saved_model.load, and is mainly intended to be used when one will be reading the saved model at a lower API level. In the example below, the graph saved by the call to tf.saved_model.save will have the variable devices correctly specified:
exported = tf.train.Checkpoint()
with tf.device('/GPU:0'):
  exported.x_gpu = tf.Variable(1.0)
with tf.device('/CPU:0'):
  exported.x_cpu = tf.Variable(1.0)
tf.saved_model.save(
    exported, export_dir,
    options=tf.saved_model.SaveOptions(
        experimental_variable_policy=
            tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))
Distributed variables are still saved as one variable under this policy.
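As a sketch of the lower-level reading mentioned above, one can parse the saved_model.pb protocol buffer directly and inspect the device field of the variable-handle nodes. This assumes the export_dir from the example; resource variables appear in the saved graph as VarHandleOp nodes:

import os
from tensorflow.core.protobuf import saved_model_pb2

saved_model = saved_model_pb2.SavedModel()
with open(os.path.join(export_dir, 'saved_model.pb'), 'rb') as f:
  saved_model.ParseFromString(f.read())

# With SAVE_VARIABLE_DEVICES, each VarHandleOp node carries its assigned
# device, e.g. a GPU device string for x_gpu and a CPU one for x_cpu.
for node in saved_model.meta_graphs[0].graph_def.node:
  if node.op == 'VarHandleOp':
    print(node.name, node.device)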
EXPAND_DISTRIBUTED_VARIABLES: Distributed variables will be explicitly expanded into their respective distributed replicas, and their assigned devices will be saved. This is useful when one wants to use the model for training in environments where the original distribution strategy is not available. Checkpoints are currently incompatible with this option, so it is not implemented in tf.saved_model.save (only internal APIs support it for now).
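A hedged sketch of where this policy would plug in; since tf.saved_model.save does not implement it, the call below is expected to fail and is shown only to illustrate the option (the strategy and variable are illustrative):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(['/CPU:0'])  # illustrative strategy
with strategy.scope():
  exported = tf.train.Checkpoint()
  exported.v = tf.Variable(1.0)  # a distributed (mirrored) variable

options = tf.saved_model.SaveOptions(
    experimental_variable_policy=
        tf.saved_model.experimental.VariablePolicy.EXPAND_DISTRIBUTED_VARIABLES)

# Expected to raise, since saved_model.save does not support this policy yet.
try:
  tf.saved_model.save(exported, '/tmp/saved_expanded', options=options)
except Exception as e:  # the exact error type may vary across TF versions
  print('Not supported by tf.saved_model.save:', e)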