tf.contrib.distributions.reduce_weighted_logsumexp

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/ops/distributions/util.py#L1049-L1141)
Computes `log(abs(sum(weight * exp(elements across tensor dimensions))))`.
tf.contrib.distributions.reduce_weighted_logsumexp(
    logx, w=None, axis=None, keep_dims=False, return_sign=False, name=None
)
If all weights `w` are known to be positive, calling `tf.reduce_logsumexp(logx + tf.math.log(w))` directly is more efficient than `du.reduce_weighted_logsumexp(logx, w)`.
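A quick numpy sketch of that identity, assuming all weights are strictly positive (the `logsumexp` helper here is a minimal stand-in for `tf.reduce_logsumexp`, not the TF op itself):

```python
import numpy as np

logx = np.array([0.1, 0.2, 0.3])
w = np.array([2.0, 3.0, 4.0])  # all strictly positive

# Minimal stable log-sum-exp helper (stand-in for tf.reduce_logsumexp).
def logsumexp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

# For positive w: log(sum(w * exp(logx))) == logsumexp(logx + log(w)),
# because w * exp(logx) == exp(logx + log(w)).
direct = np.log(np.sum(w * np.exp(logx)))
folded = logsumexp(logx + np.log(w))
assert np.isclose(direct, folded)
```

The trick only works when every weight is positive, since `log(w)` is undefined otherwise; that is exactly the case this function handles via `return_sign`.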
Reduces `logx` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.
This function is more numerically stable than `log(sum(w * exp(input)))`. It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs.
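The stability claim is easy to demonstrate in numpy: the naive formula overflows for large inputs, while subtracting the max before exponentiating (the standard log-sum-exp shift, which is what implementations like this one typically do) keeps every exponential in range:

```python
import numpy as np

x = np.array([1000.0, 1000.0])  # large inputs
w = np.array([1.0, 1.0])

# Naive evaluation overflows: exp(1000) is inf in float64.
with np.errstate(over="ignore"):
    naive = np.log(np.sum(w * np.exp(x)))  # inf

# Shifting by the max first keeps exponentials in [0, 1].
m = x.max()
stable = m + np.log(np.sum(w * np.exp(x - m)))  # 1000 + log(2)

assert np.isinf(naive)
assert np.isclose(stable, 1000.0 + np.log(2.0))
```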
For example:
x = tf.constant([[0., 0, 0],
                 [0, 0, 0]])

w = tf.constant([[-1., 1, 1],
                 [1, 1, 1]])

du.reduce_weighted_logsumexp(x, w)
# ==> log(-1*1 + 1*1 + 1*1 + 1*1 + 1*1 + 1*1) = log(4)

du.reduce_weighted_logsumexp(x, w, axis=0)
# ==> [log(-1+1), log(1+1), log(1+1)]

du.reduce_weighted_logsumexp(x, w, axis=1)
# ==> [log(-1+1+1), log(1+1+1)]

du.reduce_weighted_logsumexp(x, w, axis=1, keep_dims=True)
# ==> [[log(-1+1+1)], [log(1+1+1)]]

du.reduce_weighted_logsumexp(x, w, axis=[0, 1])
# ==> log(-1+5)
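The examples above can be reproduced with a rough numpy model of the documented semantics (a hypothetical helper for illustration, not the TF implementation):

```python
import numpy as np

# Rough numpy sketch of the documented semantics (hypothetical helper).
def reduce_weighted_logsumexp(logx, w, axis=None, return_sign=False):
    # Shift by the max for numerical stability, then reduce.
    m = np.max(logx, axis=axis, keepdims=True)
    s = np.sum(w * np.exp(logx - m), axis=axis, keepdims=True)
    lswe = np.squeeze(m + np.log(np.abs(s)), axis=axis)
    if return_sign:
        return lswe, np.squeeze(np.sign(s), axis=axis)
    return lswe

x = np.zeros((2, 3))
w = np.array([[-1., 1, 1],
              [1, 1, 1]])

# Full reduction: log(-1 + 5) = log(4).
assert np.isclose(reduce_weighted_logsumexp(x, w), np.log(4.0))

# Row-wise reduction with signs: [log(1), log(3)], both sums positive.
lswe, sign = reduce_weighted_logsumexp(x, w, axis=1, return_sign=True)
assert np.allclose(lswe, [np.log(1.0), np.log(3.0)])
assert np.array_equal(sign, [1.0, 1.0])
```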
| Args | |
|---|---|
| `logx` | The tensor to reduce. Should have numeric type. |
| `w` | The weight tensor. Should have numeric type identical to `logx`. |
| `axis` | The dimensions to reduce. If `None` (the default), reduces all dimensions. Must be in the range `[-rank(input_tensor), rank(input_tensor))`. |
| `keep_dims` | If true, retains reduced dimensions with length 1. |
| `return_sign` | If `True`, returns the sign of the result. |
| `name` | A name for the operation (optional). |
| Returns | |
|---|---|
| `lswe` | The `log(abs(sum(weight * exp(x))))` reduced tensor. |
| `sign` | (Optional) The sign of `sum(weight * exp(x))`. |
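When the weighted sum is negative, `lswe` holds the log of its absolute value and `sign` carries the sign that the log discards; together they recover the original sum. A small numpy illustration of that contract (the computation here mirrors the documented return values, not the TF op itself):

```python
import numpy as np

logx = np.array([np.log(2.0), np.log(5.0)])
w = np.array([1.0, -1.0])

total = np.sum(w * np.exp(logx))  # 2 - 5 = -3
lswe = np.log(np.abs(total))      # log(3)
sign = np.sign(total)             # -1.0

# sign * exp(lswe) reconstructs the signed weighted sum.
assert np.isclose(sign * np.exp(lswe), -3.0)
```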
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.