Computes the mean of the values of a Tensor
over the whole dataset.
tft.mean(
    x: common_types.TensorType,
    reduce_instance_dims: bool = True,
    name: Optional[str] = None,
    output_dtype: Optional[tf.DType] = None
) -> tf.Tensor
Args:
  x: A Tensor, SparseTensor, or RaggedTensor. Its type must be floating
    point (float{16|32|64}) or integral ([u]int{8|16|32|64}).
  reduce_instance_dims: By default collapses the batch and instance
    dimensions to arrive at a single scalar output. If False, only
    collapses the batch dimension and outputs a vector of the same shape
    as the input.
  name: (Optional) A name for this operation.
  output_dtype: (Optional) If not None, casts the output tensor to this
    type.
Returns:
  A Tensor containing the mean. If x is floating point, the mean will
  have the same type as x. If x is integral, the output is cast to
  float32. NaNs and infinite input values are ignored.
Raises:
  TypeError: If the type of x is not supported.
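
Example:
  A minimal sketch of tft.mean inside a tf.Transform preprocessing_fn;
  the feature name 'x' and its [batch, 3] shape are assumptions for
  illustration, not part of the API.

    import tensorflow_transform as tft

    def preprocessing_fn(inputs):
      # 'x' is a hypothetical dense float32 feature of shape [batch, 3].
      x = inputs['x']

      # Default: collapses the batch and instance dimensions into a
      # single scalar mean computed over the whole dataset.
      scalar_mean = tft.mean(x)

      # reduce_instance_dims=False: collapses only the batch dimension,
      # yielding a length-3 vector of per-column means.
      per_column_mean = tft.mean(x, reduce_instance_dims=False)

      return {
          'x_centered': x - scalar_mean,
          'x_centered_per_column': x - per_column_mean,
      }

  Both analyzer results are constants by transform time, so they
  broadcast against each batch of x like ordinary tensors.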