tf.contrib.layers.recompute_grad


Decorator that recomputes the function on the backwards pass.

To use this function, you must use ResourceVariables (i.e. `variable_scope(name, use_resource=True)`), which are the default in Eager mode and when running on TPU.
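
A minimal usage sketch, assuming a TF 1.x graph-mode program (the name `block` and the shapes are illustrative, not part of the API): variables are created inside the function under a resource variable scope, so the function does not close over any outside Tensors or Variables.

```python
import tensorflow as tf  # TF 1.x, where tf.contrib.layers is available

@tf.contrib.layers.recompute_grad
def block(x):
  # fn takes Tensors as positional arguments and returns a tuple of Tensors.
  # Variables are created inside fn under a resource variable scope, so fn
  # closes over nothing from the enclosing scope.
  with tf.variable_scope("block", use_resource=True):
    w = tf.get_variable("w", shape=[16, 16])
    return (tf.nn.relu(tf.matmul(x, w)),)
```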

Args:
  fn: a function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Note that fn should not close over any other Tensors or Variables.
  use_data_dep: bool, if True will use a dummy data dependency to force the recompute to happen. If False will use a control dependency. By default will be True if in an XLA context and False otherwise. XLA ignores control dependencies and so this data dependency is necessary.
  tupleize_grads: bool, if True will use control dependencies to ensure that all gradients are produced before any are consumed by downstream ops. If use_data_dep is also True, will use a data dependency instead of a control dependency. (See the sketch after this list for a call that sets these flags explicitly.)
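
As a sketch, the flags documented above can also be set explicitly when wrapping; here `block_fn` is a hypothetical function meeting the requirements on fn.

```python
wrapped_fn = tf.contrib.layers.recompute_grad(
    block_fn,             # hypothetical fn: positional Tensors in, tuple of Tensors out
    use_data_dep=True,    # dummy data dependency; needed under XLA, which ignores control deps
    tupleize_grads=True)  # hold back gradient consumption until all gradients are produced
```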

Returns:
  A wrapped fn that is identical to fn when called, but its activations will be discarded and recomputed on the backwards pass (i.e. on a call to tf.gradients).
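
Continuing the sketch above, the wrapped `block` is called like the original function; the recomputation is triggered by the gradient computation:

```python
x = tf.random_normal([8, 16])
y, = block(x)                 # forward pass; intermediate activations are dropped
loss = tf.reduce_sum(y)
# Backward pass: block is re-executed to regenerate activations for the gradient.
grads = tf.gradients(loss, tf.trainable_variables())
```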

Raises:
  ValueError: if fn closes over any Tensors or Variables.
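
For instance, a function that closes over an outside Tensor violates this contract (a hypothetical sketch; `h` and `bad_fn` are illustrative, and exactly where the ValueError surfaces depends on the implementation):

```python
h = tf.constant(1.0)

def bad_fn(x):
  return (x + h,)  # closes over the Tensor `h` from the enclosing scope

bad = tf.contrib.layers.recompute_grad(bad_fn)
out, = bad(tf.zeros([2]))  # expected to raise ValueError per the contract above
```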