
Computes the maximum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.
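The same shape rules can be sketched with NumPy's np.max, which tf.reduce_max mirrors for these cases (a sketch for illustration, not the TensorFlow implementation itself):

```python
import numpy as np

# A 2x3 matrix to reduce.
x = np.array([[1, 5, 3],
              [4, 2, 6]])

# Reducing along axis 0 drops that dimension: shape (2, 3) -> (3,).
print(np.max(x, axis=0))                 # [4 5 6]

# keepdims=True retains the reduced dimension with length 1: shape (1, 3).
print(np.max(x, axis=0, keepdims=True))  # [[4 5 6]]

# axis=None reduces all dimensions to a single element.
print(np.max(x))                         # 6
```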

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

Usage example:

x = tf.constant([5, 1, 2, 4])
print(tf.reduce_max(x))
tf.Tensor(5, shape=(), dtype=int32)

x = tf.constant([-5, -1, -2, -4])
print(tf.reduce_max(x))
tf.Tensor(-1, shape=(), dtype=int32)

x = tf.constant([4, float('nan')])
print(tf.reduce_max(x))
tf.Tensor(4.0, shape=(), dtype=float32)

x = tf.constant([float('nan'), float('nan')])
print(tf.reduce_max(x))
tf.Tensor(-inf, shape=(), dtype=float32)

x = tf.constant([float('-inf'), float('inf')])
print(tf.reduce_max(x))
tf.Tensor(inf, shape=(), dtype=float32)

Note that the NaN handling above differs from NumPy; see the NumPy docs for np.amax and np.nanmax behavior.
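For comparison, a short NumPy sketch of the two behaviors the docs point at: np.amax propagates NaN, while np.nanmax ignores it (as tf.reduce_max does in the [4, nan] example above):

```python
import numpy as np

x = np.array([4.0, float('nan')])

# np.amax propagates NaN through the reduction.
print(np.amax(x))    # nan

# np.nanmax ignores NaN values instead.
print(np.nanmax(x))  # 4.0
```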

Args:
input_tensor: The tensor to reduce. Should have real numeric type.
axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
keepdims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
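The stated axis range allows negative values, which index from the last dimension. A NumPy sketch of the same convention (tf.reduce_max accepts the same negative-axis indexing):

```python
import numpy as np

x = np.array([[1, 5, 3],
              [4, 2, 6]])

# axis=-1 refers to the last dimension, equivalent to axis=1 for a rank-2 tensor.
print(np.max(x, axis=-1))  # [5 6]
print(np.max(x, axis=1))   # [5 6]
```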

Returns:
The reduced tensor.