Quantizes and sums values securely.
```python
tff.aggregators.secure_quantized_sum(
    client_value, lower_bound, upper_bound
)
```
The provided client_value can be either a single Tensor or a nested structure of
Tensors. If it is a nested structure, lower_bound and upper_bound must either
both be scalars, or both have the same structure as client_value, with each
element being a scalar that represents the bounds to be used for the
corresponding Tensor in client_value.
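As an illustrative sketch only (the names here and the use of tf.nest are for
concreteness and are not part of this API), the structure-matching requirement
for per-tensor bounds can be pictured as follows:

```python
import tensorflow as tf

# Illustrative only: for a client value that is a dict of two tensors,
# per-tensor bounds are a matching dict of scalars (one pair per tensor).
client_value_structure = {'weights': tf.zeros([3]), 'bias': tf.zeros([])}
lower_bound = {'weights': -1.0, 'bias': -5.0}
upper_bound = {'weights': 1.0, 'bias': 5.0}

# Both bound structures mirror the client value structure, element for element.
tf.nest.assert_same_structure(client_value_structure, lower_bound)
tf.nest.assert_same_structure(client_value_structure, upper_bound)
```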
This method converts each Tensor in the provided client_value to an appropriate
format and uses the tff.federated_secure_sum_bitwidth operator to realize
the sum.
The dtype of the Tensors in the provided client_value must be one of [tf.int32,
tf.int64, tf.float32, tf.float64].
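For orientation, here is a minimal usage sketch, assuming the standard
tff.federated_computation tracing API; the computation name and the client
value type below are illustrative, not prescribed by this API:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Each client holds a single float32 vector; the bounds are Python constants,
# so values outside [-1.0, 1.0] will be clipped before quantization.
@tff.federated_computation(tff.type_at_clients(tff.TensorType(tf.float32, [2])))
def secure_sum_example(client_value):
  return tff.aggregators.secure_quantized_sum(client_value, -1.0, 1.0)
```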
If the dtype of client_value is tf.int32 or tf.int64, the summation is
possibly exact, depending on lower_bound and upper_bound: if
upper_bound - lower_bound < 2**32, the summation will be exact. If it is
not, client_value will be quantized to a precision of 32 bits, so the worst-case
error introduced for the value of each client will be approximately
(upper_bound - lower_bound) / 2**32. Deterministic rounding to the nearest value
is used in such cases.
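A quick numeric illustration of the integer case (the specific bounds below are
made up for this example):

```python
# Range fits within 2**32: the sum is exact (e.g. the full tf.int32 range).
assert (2**31 - 1) - (-2**31) < 2**32

# Range wider than 2**32 (possible only for tf.int64 inputs): values are
# quantized to 32 bits, with a worst-case per-client error of about
# (upper_bound - lower_bound) / 2**32.
lower_bound, upper_bound = 0, 2**40
print((upper_bound - lower_bound) / 2**32)  # 256.0
```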
If the dtype of client_value is tf.float32 or tf.float64, the summation
is generally not accurate up to full floating point precision. Instead, the
values are first clipped to the [lower_bound, upper_bound] range. These
values are then uniformly quantized to 32-bit resolution, using deterministic
rounding to round the values to the quantization points. Rounding happens
roughly as follows (the implementation is a bit more complex in order to
mitigate numerical stability issues):
```python
values = tf.round(
    (client_value - lower_bound) * ((2**32 - 1) / (upper_bound - lower_bound)))
```
After summation, the inverse operation is performed, so the return value
is of the same dtype as the input client_value.
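The quantize/sum/dequantize round trip can be sketched in isolation as follows;
this is illustrative only, as the real implementation additionally handles
numerical-stability details and performs the intermediate sum securely:

```python
import tensorflow as tf

lower_bound, upper_bound = -1.0, 1.0
scale = (2**32 - 1) / (upper_bound - lower_bound)

# Values outside the bounds are clipped before quantization.
client_value = tf.constant([-0.5, 0.25, 2.0], dtype=tf.float64)
clipped = tf.clip_by_value(client_value, lower_bound, upper_bound)
quantized = tf.round((clipped - lower_bound) * scale)

# Inverse operation, applied after the secure summation in the real aggregator.
dequantized = quantized / scale + lower_bound
print(dequantized.numpy())  # approximately [-0.5, 0.25, 1.0]
```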
In terms of accuracy, it is safe to assume accuracy within 7-8 significant
digits for tf.float32 inputs, and 8-9 significant digits for tf.float64
inputs, where the significant digits refer to precision relative to the range
of the provided bounds. Thus, these bounds should not be set extremely wide.
Accuracy losses arise due to (1) quantization within the given clipping range,
(2) float precision of final outputs (e.g. tf.float32 has 23 bits in its
mantissa), and (3) precision losses that arise in doing math on tf.float32
and tf.float64 inputs.
As a concrete example, if the range is +/- 1000, errors up to 1e-4 per
element should be expected for tf.float32 and up to 1e-5 for tf.float64.
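A rough back-of-the-envelope check for this +/- 1000 example, relating the
listed loss sources to the error magnitudes quoted above (illustrative only,
not an additional guarantee):

```python
bound = 1000.0

# Loss source (1): the 32-bit quantization step within the clipping range.
print(2 * bound / (2**32 - 1))  # ~4.7e-07

# Loss source (2): tf.float32 resolution at the scale of the bounds
# (23-bit mantissa), consistent with the ~1e-4 figure for tf.float32 above.
print(bound * 2.0**-23)  # ~1.2e-04
```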
| Args | |
|---|---|
| client_value | A tff.Value placed at tff.CLIENTS. |
| lower_bound | The smallest possible value for client_value (inclusive). Values smaller than this bound will be clipped. Must be either a scalar or a nested structure of scalars, matching the structure of client_value. Must be either a Python constant or a tff.Value placed at tff.SERVER, with dtype matching that of client_value. |
| upper_bound | The largest possible value for client_value (inclusive). Values greater than this bound will be clipped. Must be either a scalar or a nested structure of scalars, matching the structure of client_value. Must be either a Python constant or a tff.Value placed at tff.SERVER, with dtype matching that of client_value. |
| Returns | |
|---|---|
| Summed client_value placed at tff.SERVER, of the same dtype as client_value. |
| Raises | |
|---|---|
| TypeError (or its subclasses) | If input arguments do not satisfy the type constraints specified above. |