# tfp.math.reduce_kahan_sum

[View source on GitHub](https://github.com/tensorflow/probability/blob/v0.23.0/tensorflow_probability/python/math/generic.py#L182-L218)

Reduces the input tensor along the given axis using Kahan summation.

    tfp.math.reduce_kahan_sum(
        input_tensor, axis=None, keepdims=False, name=None
    )

Returns both the total and the correction term, as a `namedtuple`,
representing the sum in higher precision as `total - correction`.
A practical use case is computing the difference of two large-magnitude sums
that we expect to be nearly equal. Rather than subtracting the totals alone,
taking the difference as
`(s0.total - s1.total) - (s0.correction - s1.correction)` retains more
precision.
Note that `total` holds all the high-order bits of the sum, so the correction
can be safely neglected if further enhanced-precision computations are not
required.

**Note:** (TF + JAX) This function does not work properly on XLA:CPU without
the environment variable `XLA_FLAGS=--xla_cpu_enable_fast_math=false`, due to
LLVM's reassociation optimizations, which simplify error terms to zero.
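The `total`/`correction` representation can be illustrated with a plain-Python sketch of Kahan (compensated) summation. The `kahan_sum` helper below is illustrative only, not TFP's tensor implementation; it mirrors the `Kahan(total, correction)` namedtuple and the difference-of-sums use case described above:

```python
import math
from collections import namedtuple

# Mirrors the namedtuple shape described above (illustrative, not TFP's type).
Kahan = namedtuple("Kahan", ["total", "correction"])

def kahan_sum(values):
    """Plain-Python sketch of Kahan (compensated) summation.

    The compensated result is total - correction.
    """
    total = 0.0
    correction = 0.0
    for x in values:
        y = x - correction            # fold the running correction into the next term
        t = total + y                 # low-order bits of y may be lost here
        correction = (t - total) - y  # algebraically zero; in floats, the lost bits
        total = t
    return Kahan(total, correction)

# A million tiny terms after 1.0: naive float summation rounds them all away.
xs = [1.0] + [1e-16] * 1_000_000
s0 = kahan_sum(xs)
naive = sum(xs)                          # == 1.0; every 1e-16 is lost
compensated = s0.total - s0.correction   # ~1.0000000001, agreeing with math.fsum(xs)

# The difference-of-sums use case: folding in the corrections retains
# precision beyond what the totals alone carry.
ys = [1.0] + [2e-16] * 1_000_000
s1 = kahan_sum(ys)
better_diff = (s0.total - s1.total) - (s0.correction - s1.correction)  # ~ -1e-10
```

Because the two totals are nearly equal, `s0.total - s1.total` is computed exactly, and the correction terms supply the low-order bits that a single float cannot hold.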
#### Args

| Argument | Description |
|----------|-------------|
| `input_tensor` | The tensor to sum. |
| `axis` | One of `None`, a Python `int`, or a sequence of Python `int`. The axes to be reduced. `None` is taken as "reduce all axes". |
| `keepdims` | Python `bool` indicating whether we return a tensor with singleton dimensions in the reduced axes (`True`), or squeeze the axes out (default, `False`). |
| `name` | Optional name for ops in scope. |
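The `axis` and `keepdims` arguments follow the usual reduction conventions. A quick NumPy sketch of the shape behavior, using `np.sum` as a stand-in for the reduction:

```python
import numpy as np

x = np.ones((2, 3))

# axis=None reduces over all axes, yielding a scalar.
total_all = np.sum(x, axis=None)                  # 6.0

# With an int axis, the reduced axis is squeezed out by default...
row_sums = np.sum(x, axis=1)                      # shape (2,)

# ...while keepdims=True retains it as a singleton dimension.
row_sums_kept = np.sum(x, axis=1, keepdims=True)  # shape (2, 1)
```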
#### Returns

| Name | Description |
|------|-------------|
| `reduced` | A `Kahan(total, correction)` namedtuple. |

Last updated 2023-11-21 UTC.