tf.grad_pass_through
Creates a grad-pass-through op with the forward behavior provided in f.
tf.grad_pass_through(
    f
)
Use this function to wrap any op, maintaining its behavior in the forward
pass, but replacing the original op in the backward graph with an identity.
For example:
x = tf.Variable(1.0, name="x")
z = tf.Variable(3.0, name="z")

with tf.GradientTape() as tape:
  # y will evaluate to 9.0
  y = tf.grad_pass_through(x.assign)(z**2)
# grads will evaluate to 6.0
grads = tape.gradient(y, z)
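Conceptually, `tf.grad_pass_through` behaves like a `tf.custom_gradient` wrapper whose backward function is the identity. The sketch below is only an illustration of that idea for a single-tensor input, not TensorFlow's actual implementation:

import tensorflow as tf

def pass_through(f):
  """Illustrative stand-in for tf.grad_pass_through (single-tensor input)."""
  @tf.custom_gradient
  def wrapper(x):
    y = f(x)          # forward pass: run f as usual
    def grad(dy):
      return dy       # backward pass: act like tf.identity
    return y, grad
  return wrapper

With this stand-in, `pass_through(x.assign)(z**2)` yields the same 9.0 forward value and 6.0 gradient as the example above.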
Another example is a 'differentiable' moving average approximation, where
gradients are allowed to flow into the last value fed to the moving average,
but the moving average is still used for the forward pass:
x = ...  # Some scalar value, e.g. a tf.Variable so the tape watches it
# A moving average object; we don't need to know how it is implemented
moving_average = MovingAverage()
with tf.GradientTape() as tape:
  # mavg_x will evaluate to the current running average value
  mavg_x = tf.grad_pass_through(moving_average)(x)
grads = tape.gradient(mavg_x, x)  # grads will evaluate to 1.0
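The example leaves `MovingAverage` abstract. A hypothetical implementation is sketched below; the class name, decay rate, and update rule are assumptions for illustration. The relevant point is that the update goes through an assign op, which the tape does not differentiate, so `tf.grad_pass_through` is what lets gradients reach `x`:

import tensorflow as tf

class MovingAverage:
  """Hypothetical exponential moving average; details are assumed."""

  def __init__(self, decay=0.9):
    self.decay = decay
    self.value = tf.Variable(0.0)

  def __call__(self, x):
    # assign returns the updated value but blocks gradients;
    # grad_pass_through treats it as identity in the backward pass.
    return self.value.assign(
        self.decay * self.value + (1.0 - self.decay) * x)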
Args:
  f: A function `f(*x)` that returns a `Tensor` or nested structure of
    `Tensor` outputs.

Returns:
  A function `h(x)` which returns the same values as `f(x)` and whose
  gradients are the same as those of an identity function.
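A quick way to check this contract is to compare `h = tf.grad_pass_through(f)` with `f` directly; `tf.square` below is just an arbitrary choice of `f`:

import tensorflow as tf

f = tf.square
h = tf.grad_pass_through(f)

x = tf.constant(3.0)
with tf.GradientTape() as tape:
  tape.watch(x)  # x is a constant, so watch it explicitly
  y = h(x)

print(y.numpy())                    # 9.0, same value as f(x)
print(tape.gradient(y, x).numpy())  # 1.0, the identity gradient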