Computes the variance of the values of a Tensor
over the whole dataset.
tft.var(
x: common_types.TensorType,
reduce_instance_dims: bool = True,
name: Optional[str] = None,
output_dtype: Optional[tf.DType] = None
) -> tf.Tensor
Uses the biased variance (0 delta degrees of freedom), as given by
sum((x - mean(x))**2) / length(x).
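For example, for the dataset values [1.0, 2.0, 3.0], mean(x) = 2.0, so the
variance is ((1 - 2)**2 + (2 - 2)**2 + (3 - 2)**2) / 3 ≈ 0.667, matching
NumPy's np.var with its default ddof=0.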
Args

x: A Tensor, SparseTensor, or RaggedTensor. Its type must be floating
point (float{16|32|64}) or integral ([u]int{8|16|32|64}).

reduce_instance_dims: By default collapses the batch and instance dimensions
to arrive at a single scalar output. If False, only collapses the batch
dimension and outputs a vector of the same shape as the input.

name: (Optional) A name for this operation.

output_dtype: (Optional) If not None, casts the output tensor to this type.

Returns

A Tensor containing the variance. If x is floating point, the variance
will have the same type as x. If x is integral, the output is cast to
float32. NaNs and infinite input values are ignored.

Raises

TypeError: If the type of x is not supported.
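Usage example (a minimal sketch, not part of the original documentation): the
snippet below shows tft.var inside a preprocessing_fn that standardizes a
numeric feature. The feature name 'income' and the small epsilon added for
numerical stability are illustrative assumptions.

import tensorflow as tf
import tensorflow_transform as tft

def preprocessing_fn(inputs):
  # 'income' is a hypothetical numeric feature used only for illustration.
  x = tf.cast(inputs['income'], tf.float32)
  # Dataset-wide statistics computed by TFT analyzers.
  mean = tft.mean(x)
  var = tft.var(x)  # scalar, since reduce_instance_dims defaults to True
  # Standardize; the 1e-6 epsilon guards against a zero variance (assumption).
  return {
      'income_standardized': (x - mean) / tf.sqrt(var + 1e-6),
  }

In practice, tft.scale_to_z_score performs this kind of standardization
directly; the sketch only illustrates where tft.var fits into a
preprocessing_fn.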