A deprecated dtype policy for a Keras layer.
tf.keras.mixed_precision.experimental.Policy( name, loss_scale='auto' )
The difference between this class and the non-experimental class is that this class has a loss_scale field and the non-experimental class does not. The loss scale is only used by tf.keras.Model.compile, which automatically wraps the optimizer with a LossScaleOptimizer if the optimizer is not already a LossScaleOptimizer. For the non-experimental Policy class, Model.compile instead wraps the optimizer with a LossScaleOptimizer if Policy.name is 'mixed_float16'.
When deserializing objects with an experimental policy using functions like tf.keras.utils.deserialize_keras_object, the policy will be deserialized as the non-experimental tf.keras.mixed_precision.Policy, and the loss scale will silently be dropped. This is so that SavedModels that are generated with an experimental policy can be restored after the experimental policy is removed.
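A minimal sketch of that fallback, assuming the policy is serialized as a plain config dict (strip_loss_scale is a hypothetical helper for illustration, not a Keras API):

```python
def strip_loss_scale(config):
    """Drop the loss_scale entry from an experimental policy's config so
    the remainder can build the non-experimental Policy (sketch)."""
    cleaned = dict(config)
    cleaned.pop("loss_scale", None)  # silently discard the loss scale
    return cleaned
```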
name: A string. Can be one of the following values:
Any dtype name, such as 'float32' or 'float64'. Both the variable and compute dtypes will be that dtype.
'mixed_float16' or 'mixed_bfloat16': The compute dtype is float16 or bfloat16, while the variable dtype is float32. These policies are used for mixed precision training.
compute_dtype: The compute dtype of this policy.
This is the dtype layers will do their computations in. Typically layers output tensors with the compute dtype as well.
Note that even if the compute dtype is float16 or bfloat16, hardware devices may not do individual adds, multiplies, and other fundamental operations in float16 or bfloat16, but instead may do some of them in float32 for numeric stability. The compute dtype is the dtype of the inputs and outputs of the TensorFlow ops that the layer executes. Internally, many TensorFlow ops will do certain internal calculations in float32 or some other device-internal intermediate format with higher precision than float16/bfloat16, to increase numeric stability.
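The mapping from policy name to compute and variable dtypes described above can be sketched as follows (dtypes_for_policy is a hypothetical helper illustrating the documented behavior, not a TensorFlow API):

```python
def dtypes_for_policy(name):
    """Return (compute_dtype, variable_dtype) for a policy name (sketch)."""
    if name == "mixed_float16":
        return "float16", "float32"   # compute in float16, variables in float32
    if name == "mixed_bfloat16":
        return "bfloat16", "float32"  # compute in bfloat16, variables in float32
    return name, name                 # a plain dtype name sets both dtypes
```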
For example, a tf.keras.layers.Dense layer, when run on a GPU with a float16 compute dtype, will pass float16 inputs to tf.linalg.matmul, but the matmul will use float32 intermediate math. The performance benefit of float16 is still apparent, due to increased memory bandwidth and modern GPUs having specialized hardware for computing matmuls on float16 inputs.
loss_scale: Returns the loss scale of this Policy.
name: Returns the name of this policy.
variable_dtype: The variable dtype of this policy.
This is the dtype layers will create their variables in, unless a layer explicitly chooses a different dtype. If this is different than Policy.compute_dtype, layers will cast variables to the compute dtype to avoid type errors. Variable regularizers are run in the variable dtype, not the compute dtype.
from_config( config, custom_objects=None )
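The get_config/from_config round trip used by this method can be sketched with a minimal class. MiniPolicy below is an illustrative stand-in, not the real implementation:

```python
class MiniPolicy:
    """Minimal sketch of a dtype policy supporting config round-tripping."""
    def __init__(self, name, loss_scale=None):
        self.name = name
        self.loss_scale = loss_scale

    def get_config(self):
        return {"name": self.name, "loss_scale": self.loss_scale}

    @classmethod
    def from_config(cls, config, custom_objects=None):
        # custom_objects is accepted for signature compatibility; unused here.
        return cls(**config)
```

Rebuilding an instance from its own config reproduces the original settings, which is what makes policies serializable inside SavedModels.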