tf.train.experimental.enable_mixed_precision_graph_rewrite


Enable mixed precision in tf.functions via a graph rewrite.
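For reference, a sketch of the call signature (the default value of loss_scale is 'dynamic'):

```python
tf.train.experimental.enable_mixed_precision_graph_rewrite(
    opt, loss_scale='dynamic'
)
```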

Mixed precision is the use of both float16 and float32 when training a model, and makes the model run faster. This function uses mixed precision to speed up the execution of tf.functions when they run on a GPU, by changing the dtype of certain operations in the function's graph from float32 to float16.
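A minimal sketch of typical usage with a Keras model (the model architecture and loss here are illustrative placeholders):

```python
import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.01)
# Wrap the optimizer and enable the float32 -> float16 graph rewrite.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(...) will then run float16-rewritten graphs on supported GPUs.
```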

This function additionally wraps the optimizer with a LossScaleOptimizer, which is required to prevent underflow in the float16 tensors during the backwards pass. An optimizer must be passed to this function; the returned optimizer applies loss scaling.

When this function is used, gradients should only be computed and applied with the returned optimizer through opt.minimize(), and not with a tf.GradientTape. This is because the returned optimizer will apply loss scaling, and tf.GradientTape will not. If you do use a tf.GradientTape, your model may train to a worse quality.
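A sketch of the recommended pattern, computing and applying gradients through the returned optimizer's minimize() rather than a manual tf.GradientTape (the variable and loss are illustrative):

```python
import tensorflow as tf

opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(
    tf.keras.optimizers.SGD(learning_rate=0.01))
var = tf.Variable(1.0)

@tf.function
def train_step():
  # minimize() applies loss scaling internally; a manual tf.GradientTape
  # would bypass the scaling and risk float16 underflow.
  opt.minimize(lambda: (var - 3.0) ** 2, var_list=[var])

train_step()
```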

Currently, mixed precision is only enabled on Volta GPUs and above. TPU support is coming soon. CPUs are not supported, as CPUs do not run float16 operations faster than float32 operations.

Args:

opt: An instance of a tf.keras.optimizers.Optimizer.
loss_scale: Either an int/float, the string "dynamic", or an instance of a tf.train.experimental.LossScale. The loss scale to use. It is recommended to keep this at its default value of "dynamic".

Returns:

A version of opt that will use loss scaling to prevent underflow.
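For example, a fixed loss scale can be passed in place of the default dynamic scaling (the value 128 is illustrative; dynamic scaling is recommended):

```python
import tensorflow as tf

# Default: dynamic loss scaling, which adjusts the scale automatically.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(
    tf.keras.optimizers.SGD())

# Alternative: a fixed loss scale.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(
    tf.keras.optimizers.SGD(), loss_scale=128)
```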