tf.contrib.layers.recompute_grad

Decorator that recomputes the function on the backwards pass.

tf.contrib.layers.recompute_grad(
    *args,
    **kwargs
)

To use this function, you must use ResourceVariables (i.e. `variable_scope(name, use_resource=True)`), which are the default in eager mode and when running on TPU.

Args:

  • fn: a function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Note that fn should not close over any other Tensors or Variables.
  • use_data_dep: bool, if True will use a dummy data dependency to force the recompute to happen. If False will use a control dependency. By default will be True if in an XLA context and False otherwise. XLA ignores control dependencies and so this data dependency is necessary.
  • tupleize_grads: bool, if True will use control dependencies to ensure that all gradients are produced before any are consumed by downstream ops. If use_data_dep is also True, will use a data dependency instead of a control dependency.

Returns:

A wrapped fn that is identical to fn when called, but its activations will be discarded and recomputed on the backwards pass (i.e. on a call to tf.gradients).

Raises:

  • ValueError: if fn closes over any Tensors or Variables.