tf.nn.log_poisson_loss
Computes log Poisson loss given log_input.
tf.nn.log_poisson_loss(
targets, log_input, compute_full_loss=False, name=None
)
Gives the log-likelihood loss between the prediction and the target under the
assumption that the target has a Poisson distribution.
Caveat: By default, this is not the exact loss, but the loss minus a
constant term [log(z!)]. That has no effect on optimization, but it
does not play well with relative loss comparisons. To compute an
approximation of the log factorial term, specify
compute_full_loss=True to enable Stirling's Approximation.
For brevity, let c = log(x) = log_input and z = targets. The log Poisson
loss is
-log(exp(-x) * (x^z) / z!)
= -log(exp(-x) * (x^z)) + log(z!)
~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
[ Note: the bracketed term is Stirling's Approximation for log(z!).
It does not depend on x and so does not affect optimization, though it
is important for correct relative loss comparisons. It is only
computed when compute_full_loss == True. ]
= x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
= exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
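
The last line above is exactly what the op computes by default. As a quick
check (the values below are illustrative, not from the source), the returned
losses match exp(c) - z * c computed by hand:

    import tensorflow as tf

    # Illustrative values: z = observed counts, c = predicted log-rates.
    targets = tf.constant([1.0, 2.0, 5.0])     # z
    log_input = tf.constant([0.0, 0.5, 1.6])   # c = log(x)

    # Default (compute_full_loss=False): the loss minus the constant log(z!) term.
    loss = tf.nn.log_poisson_loss(targets, log_input)

    # Same quantity computed directly from the last line of the derivation: exp(c) - z * c.
    manual = tf.exp(log_input) - targets * log_input
    tf.debugging.assert_near(loss, manual)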
Args

targets: A Tensor of the same type and shape as log_input.
log_input: A Tensor of type float32 or float64.
compute_full_loss: Whether to compute the full loss. If False, a constant
  term is dropped in favor of more efficient optimization.
name: A name for the operation (optional).
Returns

A Tensor of the same shape as log_input with the componentwise
log Poisson losses.
Raises

ValueError: If log_input and targets do not have the same shape.
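
The effect of compute_full_loss can also be observed directly. The sketch
below (again with illustrative values, not from the source) compares the
default loss to the full loss; the two differ only by the Stirling term,
which depends on targets alone, so gradients with respect to log_input are
identical:

    import tensorflow as tf

    targets = tf.constant([2.0, 5.0, 10.0])
    log_input = tf.constant([0.7, 1.6, 2.3])

    partial = tf.nn.log_poisson_loss(targets, log_input)                       # exp(c) - z * c
    full = tf.nn.log_poisson_loss(targets, log_input, compute_full_loss=True)  # adds ~log(z!)

    # The difference is the Stirling approximation of log(z!); it does not
    # depend on log_input, so it shifts the loss without changing its gradients.
    print((full - partial).numpy())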