tf.contrib.layers.recompute_grad

Decorator that recomputes the function on the backwards pass.

    tf.contrib.layers.recompute_grad(
        fn, use_data_dep=_USE_DEFAULT, tupleize_grads=False
    )

To use this function, you must use `ResourceVariable`s (i.e.
`variable_scope(name, use_resource=True)`), which are the default in Eager mode
and when running on TPU.

Warning: Because the function will be called again on the backwards pass, the
user should be careful not to use ops in their function that mutate state or
have randomness (for example, batch normalization or dropout). If the function
does have such operations, it is recommended that it take the `is_recomputing`
keyword argument, which will be `False` on the forward pass and `True` on the
backwards pass, so that it can disable state changes when `is_recomputing=True`
(for example, not updating the moving averages in batch normalization).
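For orientation, here is a minimal usage sketch. It is not taken from the
original page; it assumes TF 1.x graph mode, and the layer sizes, scope names,
and input shapes are illustrative only.

```python
import tensorflow as tf

@tf.contrib.layers.recompute_grad
def block(x, is_recomputing=False):
  # Per the warning above, `is_recomputing` is False on the forward pass and
  # True when the function is re-run on the backwards pass; a function with
  # stateful ops (e.g. batch-norm moving-average updates) can use it to skip
  # state changes while recomputing. This sketch has no stateful ops.
  h = tf.layers.dense(x, 128, activation=tf.nn.relu, name="dense1")
  return tf.layers.dense(h, 128, name="dense2")

# ResourceVariables, as required above.
with tf.variable_scope("model", use_resource=True):
  x = tf.random.normal([8, 128])
  y = block(x)             # forward pass; intermediate activations are discarded
  loss = tf.reduce_sum(y)

# The activations inside `block` are recomputed while building these gradients.
grads = tf.gradients(loss, tf.trainable_variables())
```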
Args
`fn`
a function that takes Tensors (all as positional arguments) and returns
a tuple of Tensors. Note that fn should not close over any other
Tensors or Variables.
`use_data_dep`
`bool`; if `True`, a dummy data dependency is used to force the recompute to
happen. If `False`, a control dependency is used instead. Defaults to `True`
in an XLA context and `False` otherwise, because XLA ignores control
dependencies, so the data dependency is necessary there.
`tupleize_grads`
`bool`; if `True`, control dependencies are used to ensure that all gradients
are produced before any are consumed by downstream ops. If `use_data_dep` is
also `True`, a data dependency is used instead of a control dependency (see
the sketch after this list).
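As a sketch of the non-default flags, the wrapper can also be applied as a
plain function call rather than as a decorator. The flag values below are
illustrative, not a recommendation; which setting helps depends on the
surrounding graph and on whether you are in an XLA context.

```python
import tensorflow as tf

def block(x):
  return tf.layers.dense(x, 128, name="dense")

# Equivalent to using @recompute_grad as a decorator, but with the dependency
# behaviour pinned explicitly instead of auto-detected from the XLA context.
wrapped_block = tf.contrib.layers.recompute_grad(
    block, use_data_dep=True, tupleize_grads=True)

with tf.variable_scope("model", use_resource=True):
  y = wrapped_block(tf.random.normal([8, 128]))
```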
Returns
A wrapped version of `fn` that is identical to `fn` when called, but whose
intermediate activations are discarded on the forward pass and recomputed on
the backwards pass (i.e. on a call to `tf.gradients`).
Raises
`ValueError`
if `fn` closes over any Tensors or Variables.
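To illustrate the Raises entry, here is a sketch of the closure anti-pattern
the decorator is documented to reject, together with the fix of passing the
Tensor in as an argument. The names are made up for illustration.

```python
import tensorflow as tf

scale = tf.constant(2.0)  # a Tensor defined outside the function below

# Rejected per the Raises entry: the decorated function must not close over
# outside Tensors or Variables, so the following is documented to raise
# ValueError.
#
#   @tf.contrib.layers.recompute_grad
#   def bad_block(x):
#     return x * scale        # closes over `scale`
#
# Instead, pass the Tensor in as a positional argument:

@tf.contrib.layers.recompute_grad
def good_block(x, scale):
  return x * scale
```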