A tf.Tensor represents a multidimensional array of elements. All elements are of a single known data type.
When writing a TensorFlow program, the main object that is manipulated and passed around is the tf.Tensor.
A tf.Tensor has the following properties:
- a single data type (float32, int32, or string, for example)
- a shape
TensorFlow supports eager execution and graph execution. In eager execution, operations are evaluated immediately. In graph execution, a computational graph is constructed for later evaluation.
TensorFlow defaults to eager execution. In the example below, the matrix multiplication results are calculated immediately.
# Compute some values using a Tensor
c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d)
print(e)
tf.Tensor(
[[1. 3.]
[3. 7.]], shape=(2, 2), dtype=float32)
Note that during eager execution, you may discover your Tensors are actually of type EagerTensor. This is an internal detail, but it does give you access to a useful function, numpy:
type(e)
<class '...ops.EagerTensor'>
print(e.numpy())
[[1. 3.]
[3. 7.]]
In TensorFlow, tf.functions are a common way to define graph execution.
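For example (a minimal sketch assuming TensorFlow 2.x; the function name is illustrative):

```python
import tensorflow as tf

@tf.function
def matmul_fn(a, b):
    # Inside a tf.function the ops are staged into a graph,
    # which is then executed as a single call.
    return tf.matmul(a, b)

c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
print(matmul_fn(c, d))  # same values as the eager example above
```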
A Tensor's shape (that is, the rank of the Tensor and the size of each dimension) may not always be fully known. In tf.function definitions, the shape may only be partially known.
Most operations produce tensors of fully-known shapes if the shapes of their inputs are also fully known, but in some cases it's only possible to find the shape of a tensor at execution time.
A number of specialized tensors are available: see tf.Variable, tf.constant, tf.placeholder, tf.sparse.SparseTensor, and tf.RaggedTensor.
Note that a tensor created from a NumPy array may share the array's underlying memory, so in-place changes to the array can be reflected in the tensor:
a = np.array([1, 2, 3])
b = tf.constant(a)
a[0] = 4
print(b) # tf.Tensor([4 2 3], shape=(3,), dtype=int64)
For more on Tensors, see the guide.
Attributes  

dtype

The DType of elements in this tensor.

name

The string name of this tensor.

ndim

Returns the number of dimensions of this tensor.

shape

Returns a tf.TensorShape that represents the shape of this tensor.
Methods
eval
eval(
feed_dict=None, session=None
)
Evaluates this tensor in a Session
.
Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.
Args  

feed_dict

A dictionary that maps Tensor objects to feed values. See
tf.Session.run for a description of the valid feed values.

session

(Optional.) The Session to be used to evaluate this tensor. If
none, the default session will be used.

Returns  

A numpy array corresponding to the value of this tensor. 
experimental_ref
experimental_ref()
DEPRECATED FUNCTION: use ref() instead.
get_shape
get_shape() -> tf.TensorShape
Returns a tf.TensorShape that represents the shape of this tensor.
In eager execution the shape is always fully known.
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(a.shape)
(2, 3)
tf.Tensor.get_shape()
is equivalent to tf.Tensor.shape
.
When executing in a tf.function
or building a model using
tf.keras.Input
, Tensor.shape
may return a partial shape (including
None
for unknown dimensions). See tf.TensorShape
for more details.
inputs = tf.keras.Input(shape = [10])
# Unknown batch size
print(inputs.shape)
(None, 10)
The shape is computed using shape inference functions that are
registered for each tf.Operation
.
The returned tf.TensorShape
is determined at build time, without
executing the underlying kernel. It is not a tf.Tensor
. If you need a
shape tensor, either convert the tf.TensorShape
to a tf.constant
, or
use the tf.shape(tensor)
function, which returns the tensor's shape at
execution time.
This is useful for debugging and for raising errors early. For example, when tracing a tf.function, no ops are executed and shapes may be unknown (see the Concrete Functions Guide for details).
@tf.function
def my_matmul(a, b):
result = a@b
# the `print` executes during tracing.
print("Result shape: ", result.shape)
return result
The shape inference functions propagate shapes to the extent possible:
f = my_matmul.get_concrete_function(
tf.TensorSpec([None,3]),
tf.TensorSpec([3,5]))
Result shape: (None, 5)
Tracing may fail if a shape mismatch can be detected:
cf = my_matmul.get_concrete_function(
tf.TensorSpec([None,3]),
tf.TensorSpec([4,5]))
Traceback (most recent call last):
ValueError: Dimensions must be equal, but are 3 and 4 for 'matmul' (op:
'MatMul') with input shapes: [?,3], [4,5].
In some cases, the inferred shape may have unknown dimensions. If
the caller has additional information about the values of these
dimensions, tf.ensure_shape
or Tensor.set_shape()
can be used to augment
the inferred shape.
@tf.function
def my_fun(a):
a = tf.ensure_shape(a, [5, 5])
# the `print` executes during tracing.
print("Result shape: ", a.shape)
return a
cf = my_fun.get_concrete_function(
tf.TensorSpec([None, None]))
Result shape: (5, 5)
Returns  

A tf.TensorShape representing the shape of this tensor.

ref
ref()
Returns a hashable reference object to this Tensor.
The primary use case for this API is to put tensors in a set/dictionary.
We can't put tensors in a set/dictionary as tensor.__hash__() is no longer available starting TensorFlow 2.0.
The following will raise an exception starting in TensorFlow 2.0:
x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(10)
tensor_set = {x, y, z}
Traceback (most recent call last):
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
tensor_dict = {x: 'five', y: 'ten'}
Traceback (most recent call last):
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
Instead, we can use tensor.ref()
.
tensor_set = {x.ref(), y.ref(), z.ref()}
x.ref() in tensor_set
True
tensor_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'}
tensor_dict[y.ref()]
'ten'
Also, the reference object provides .deref()
function that returns the
original Tensor.
x = tf.constant(5)
x.ref().deref()
<tf.Tensor: shape=(), dtype=int32, numpy=5>
set_shape
set_shape(
shape
)
Updates the shape of this tensor.
With eager execution this operates as a shape assertion. Here the shapes match:
t = tf.constant([[1,2,3]])
t.set_shape([1, 3])
Passing a None
in the new shape allows any value for that axis:
t.set_shape([1,None])
An error is raised if an incompatible shape is passed.
t.set_shape([1,5])
Traceback (most recent call last):
ValueError: Tensor's shape (1, 3) is not compatible with supplied
shape [1, 5]
When executing in a tf.function
, or building a model using
tf.keras.Input
, Tensor.set_shape
will merge the given shape
with
the current shape of this tensor, and set the tensor's shape to the
merged value (see tf.TensorShape.merge_with
for details):
t = tf.keras.Input(shape=[None, None, 3])
print(t.shape)
(None, None, None, 3)
Dimensions set to None
are not updated:
t.set_shape([None, 224, 224, None])
print(t.shape)
(None, 224, 224, 3)
The main use case for this is to provide additional shape information that cannot be inferred from the graph alone.
For example, if you know all the images in a dataset have shape [28, 28, 3], you can set it with Tensor.set_shape:
@tf.function
def load_image(filename):
raw = tf.io.read_file(filename)
image = tf.image.decode_png(raw, channels=3)
# the `print` executes during tracing.
print("Initial shape: ", image.shape)
image.set_shape([28, 28, 3])
print("Final shape: ", image.shape)
return image
Trace the function, see the Concrete Functions Guide for details.
cf = load_image.get_concrete_function(
tf.TensorSpec([], dtype=tf.string))
Initial shape: (None, None, 3)
Final shape: (28, 28, 3)
Similarly, the tf.io.parse_tensor function could return a tensor with any shape, even one whose tf.rank is unknown. If you know that all your serialized tensors will be 2-dimensional, set it with set_shape:
@tf.function
def my_parse(string_tensor):
result = tf.io.parse_tensor(string_tensor, out_type=tf.float32)
# the `print` executes during tracing.
print("Initial shape: ", result.shape)
result.set_shape([None, None])
print("Final shape: ", result.shape)
return result
Trace the function
concrete_parse = my_parse.get_concrete_function(
tf.TensorSpec([], dtype=tf.string))
Initial shape: <unknown>
Final shape: (None, None)
Make sure it works:
t = tf.ones([5,3], dtype=tf.float32)
serialized = tf.io.serialize_tensor(t)
print(serialized.dtype)
<dtype: 'string'>
print(serialized.shape)
()
t2 = concrete_parse(serialized)
print(t2.shape)
(5, 3)
# Serialize a rank-3 tensor
t = tf.ones([5,5,5], dtype=tf.float32)
serialized = tf.io.serialize_tensor(t)
# The function still runs, even though it `set_shape([None,None])`
t2 = concrete_parse(serialized)
print(t2.shape)
(5, 5, 5)
Args  

shape

A TensorShape representing the shape of this tensor, a
TensorShapeProto , a list, a tuple, or None.

Raises  

ValueError

If shape is not compatible with the current shape of
this tensor.

__abs__
__abs__(
name=None
)
Computes the absolute value of a tensor.
Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input.
Given a tensor x
of complex numbers, this operation returns a tensor of type
float32
or float64
that is the absolute value of each element in x
. For
a complex number \(a + bj\), its absolute value is computed as
\(\sqrt{a^2 + b^2}\).
For example:
# real number
x = tf.constant([2.25, 3.25])
tf.abs(x)
<tf.Tensor: shape=(2,), dtype=float32,
numpy=array([2.25, 3.25], dtype=float32)>
# complex number
x = tf.constant([[2.25 + 4.75j], [3.25 + 5.75j]])
tf.abs(x)
<tf.Tensor: shape=(2, 1), dtype=float64, numpy=
array([[5.25594901],
[6.60492241]])>
Args  

x

A Tensor or SparseTensor of type float16 , float32 , float64 ,
int32 , int64 , complex64 or complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor or SparseTensor of the same size, type and sparsity as x ,
with absolute values. Note, for complex64 or complex128 input, the
returned Tensor will be of type float32 or float64 , respectively.
__add__
__add__(
y
)
The operation invoked by the Tensor.__add__ operator.
Purpose in the API  

This method is exposed in TensorFlow's API so that library developers
can register dispatching for Tensor.add to allow it to handle
custom composite tensors & other custom objects.
The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. 
Args  

x

The left-hand side of the + operator.

y

The right-hand side of the + operator.

name

an optional name for the operation. 
Returns  

The result of the elementwise + operation.
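As a minimal sketch (assuming TensorFlow 2.x eager execution), the + operator adds tensors elementwise and broadcasts scalars:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([10, 20, 30])
print(x + y)  # elementwise addition: [11, 22, 33]
print(x + 1)  # the scalar broadcasts: [2, 3, 4]
```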

__and__
__and__(
y
)
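No docstring is attached to this overload. As a hedged sketch (assuming TensorFlow 2.x), the & operator computes elementwise logical AND for boolean tensors:

```python
import tensorflow as tf

a = tf.constant([True, True, False])
b = tf.constant([True, False, False])
print(a & b)  # elementwise logical AND: [True, False, False]
```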
__array__
__array__(
dtype=None
)
__bool__
__bool__()
Dummy method to prevent a tensor from being used as a Python bool
.
This overload raises a TypeError
when the user inadvertently
treats a Tensor
as a boolean (most commonly in an if
or while
statement), in code that was not converted by AutoGraph. For example:
if tf.constant(True): # Will raise.
# ...
if tf.constant(5) < tf.constant(7): # Will raise.
# ...
Raises  

TypeError .

__div__
__div__(
y
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
This function divides x
and y
, forcing Python 2 semantics. That is, if x
and y
are both integers then the result will be an integer. This is in
contrast to Python 3, where division with /
is always a float while division
with //
is always an integer.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y returns the quotient of x and y.

Migrate to TF2
This function is deprecated in TF2. Prefer using the Tensor division operator,
tf.divide
, or tf.math.divide
, which obey the Python 3 division operator
semantics.
__eq__
__eq__(
other
)
The operation invoked by the Tensor.__eq__ operator.
Compares two tensors element-wise for equality if they are broadcast-compatible, or returns False if they are not broadcast-compatible.
(Note that this behavior differs from tf.math.equal, which raises an exception if the two tensors are not broadcast-compatible.)
Purpose in the API  

This method is exposed in TensorFlow's API so that library developers
can register dispatching for Tensor.eq to allow it to handle
custom composite tensors & other custom objects.
The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. 
Args  

self

The left-hand side of the == operator.

other

The right-hand side of the == operator.

Returns  

The result of the elementwise == operation, or False if the arguments
are not broadcast-compatible.
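A minimal sketch (assuming TensorFlow 2.x eager execution):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([1, 5, 3])
print(x == y)  # elementwise comparison: [True, False, True]
```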

__floordiv__
__floordiv__(
y
)
Divides x / y elementwise, rounding toward the most negative integer.
Mathematically, this is equivalent to floor(x / y). For example:
floor(8.4 / 4.0) = floor(2.1) = 2.0
floor(-8.4 / 4.0) = floor(-2.1) = -3.0
This is equivalent to the '//' operator in Python 3.0 and above.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y rounded toward negative infinity.

Raises  

TypeError

If the inputs are complex. 
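A minimal sketch (assuming TensorFlow 2.x eager execution) showing the rounding toward the most negative integer:

```python
import tensorflow as tf

x = tf.constant([8.4, -8.4])
y = tf.constant([4.0, 4.0])
print(x // y)  # floor(2.1) = 2.0, floor(-2.1) = -3.0
```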
__ge__
__ge__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x >= y) elementwise.
Example:
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__getitem__
__getitem__(
slice_spec, var=None
)
Overload for Tensor.getitem.
This operation extracts the specified region from the tensor. The notation is similar to NumPy, with the restriction that currently only basic indexing is supported. That means that using a non-scalar tensor as input is not currently allowed.
Some useful examples:
# Strip leading and trailing 2 elements
foo = tf.constant([1,2,3,4,5,6])
print(foo[2:-2]) # => [3,4]
# Skip every other row and reverse the order of the columns
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[::2,::-1]) # => [[3,2,1], [9,8,7]]
# Use scalar tensors as indices on both dimensions
print(foo[tf.constant(0), tf.constant(2)]) # => 3
# Insert another dimension
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :]) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[:, tf.newaxis, :]) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
print(foo[:, :, tf.newaxis]) # => [[[1],[2],[3]], [[4],[5],[6]],
[[7],[8],[9]]]
# Ellipses (3 equivalent operations)
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :]) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis, ...]) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis]) # => [[[1,2,3], [4,5,6], [7,8,9]]]
# Masks
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[foo > 2]) # => [3, 4, 5, 6, 7, 8, 9]
Purpose in the API  

This method is exposed in TensorFlow's API so that library developers
can register dispatching for Tensor.getitem to allow it to handle
custom composite tensors & other custom objects.
The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. 
Args  

tensor

A tf.Tensor object. 
slice_spec

The arguments to Tensor.getitem. 
var

In the case of variable slice assignment, the Variable object to slice (i.e. tensor is the read-only view of this variable). 
Returns  

The appropriate slice of "tensor", based on "slice_spec". 
Raises  

ValueError

If a slice range has a negative size. 
TypeError

If the slice indices aren't int, slice, ellipsis, tf.newaxis or scalar int32/int64 tensors. 
__gt__
__gt__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x > y) elementwise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__invert__
__invert__(
name=None
)
__iter__
__iter__()
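No docstring is attached to this overload. As a hedged sketch (assuming TensorFlow 2.x eager execution), iterating over a tensor yields sub-tensors along the first axis; the first dimension must be statically known:

```python
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
for row in t:
    # yields the rank-1 tensors [1, 2] and [3, 4]
    print(row.numpy())
```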
__le__
__le__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x <= y) elementwise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__len__
__len__()
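No docstring is attached to this overload. As a hedged sketch (assuming TensorFlow 2.x), len() returns the size of the first dimension when it is statically known:

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
print(len(t))  # size of the first dimension: 2
```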
__lt__
__lt__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x < y) elementwise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__matmul__
__matmul__(
y
)
Multiplies matrix a
by matrix b
, producing a
* b
.
The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.
Both matrices must be of the same type. The supported types are:
bfloat16
, float16
, float32
, float64
, int32
, int64
,
complex64
, complex128
.
Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.
If one or both of the matrices contain a lot of zeros, a more efficient
multiplication algorithm can be used by setting the corresponding
a_is_sparse
or b_is_sparse
flag to True
. These are False
by default.
This optimization is only available for plain matrices (rank-2 tensors) with data types bfloat16 or float32.
A simple 2D tensor matrix multiplication:
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a # 2D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b # 2D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7, 8],
[ 9, 10],
[11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58, 64],
[139, 154]], dtype=int32)>
A batch matrix multiplication with batch shape [2]:
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a # 3D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b # 3D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
[15, 16],
[17, 18]],
[[19, 20],
[21, 22],
[23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
[229, 244]],
[[508, 532],
[697, 730]]], dtype=int32)>
Since Python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:
d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Args  

a

tf.Tensor of type float16 , float32 , float64 , int32 ,
complex64 , complex128 and rank > 1.

b

tf.Tensor with same type and rank as a .

transpose_a

If True , a is transposed before multiplication.

transpose_b

If True , b is transposed before multiplication.

adjoint_a

If True , a is conjugated and transposed before
multiplication.

adjoint_b

If True , b is conjugated and transposed before
multiplication.

a_is_sparse

If True , a is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in a are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

b_is_sparse

If True , b is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in b are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

output_type

The output datatype if needed. Defaults to None in which case the output_type is the same as input type. Currently only works when input tensors are type (u)int8 and output_type can be int32. 
name

Name for the operation (optional). 
Returns  

A tf.Tensor of the same type as a and b where each innermost matrix
is the product of the corresponding matrices in a and b , e.g. if all
transpose or adjoint attributes are False :


Note

This is matrix product, not elementwise product. 
Raises  

ValueError

If transpose_a and adjoint_a , or transpose_b and
adjoint_b are both set to True .

TypeError

If output_type is specified but the types of a , b and
output_type is not (u)int8, (u)int8 and int32.

__mod__
__mod__(
y
)
Returns elementwise remainder of division.
This follows Python semantics in that the
result here is consistent with a flooring divide. E.g.
floor(x / y) * y + floormod(x, y) = x
, regardless of the signs of x and y.
Args  

x

A Tensor . Must be one of the following types: int8 , int16 , int32 ,
int64 , uint8 , uint16 , uint32 , uint64 , bfloat16 , half ,
float32 , float64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .
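A minimal sketch (assuming TensorFlow 2.x) showing the flooring semantics, which match Python's % on integers:

```python
import tensorflow as tf

x = tf.constant([7, -7])
y = tf.constant([3, 3])
print(x % y)  # [1, 2]: floor(-7 / 3) * 3 + 2 == -7
```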

__mul__
__mul__(
y
)
Dispatches elementwise mul for "Dense*Dense" and "Dense*Sparse".
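A minimal sketch for the dense case (assuming TensorFlow 2.x eager execution):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([4, 5, 6])
print(x * y)  # elementwise product: [4, 10, 18]
```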
__ne__
__ne__(
other
)
The operation invoked by the Tensor.__ne__ operator.
Compares two tensors element-wise for inequality if they are broadcast-compatible, or returns True if they are not broadcast-compatible.
(Note that this behavior differs from tf.math.not_equal, which raises an exception if the two tensors are not broadcast-compatible.)
Purpose in the API  

This method is exposed in TensorFlow's API so that library developers
can register dispatching for Tensor.ne to allow it to handle
custom composite tensors & other custom objects.
The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. 
Args  

self

The left-hand side of the != operator.

other

The right-hand side of the != operator.

Returns  

The result of the elementwise != operation, or True if the arguments
are not broadcast-compatible.
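A minimal sketch (assuming TensorFlow 2.x eager execution):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([1, 5, 3])
print(x != y)  # elementwise comparison: [False, True, False]
```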

__neg__
__neg__(
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Computes numerical negative value elementwise.
I.e., \(y = -x\).
Args  

x

A Tensor . Must be one of the following types: bfloat16 , half , float32 , float64 , int8 , int16 , int32 , int64 , complex64 , complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .
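A minimal sketch (assuming TensorFlow 2.x eager execution):

```python
import tensorflow as tf

x = tf.constant([1, -2, 3])
print(-x)  # [-1, 2, -3]
```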
__nonzero__
__nonzero__()
Dummy method to prevent a tensor from being used as a Python bool
.
This is the Python 2.x counterpart to __bool__()
above.
Raises  

TypeError .

__or__
__or__(
y
)
__pow__
__pow__(
y
)
Computes the power of one value to another.
Given a tensor x
and a tensor y
, this operation computes \(x^y\) for
corresponding elements in x
and y
. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args  

x

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

y

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor .

__radd__
__radd__(
x
)
The operation invoked by the Tensor.__add__ operator.
Purpose in the API  

This method is exposed in TensorFlow's API so that library developers
can register dispatching for Tensor.add to allow it to handle
custom composite tensors & other custom objects.
The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. 
Args  

x

The left-hand side of the + operator.

y

The right-hand side of the + operator.

name

an optional name for the operation. 
Returns  

The result of the elementwise + operation.
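A minimal sketch (assuming TensorFlow 2.x): __radd__ handles the case where the left operand is not a tensor, so a plain Python number on the left still works:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(1 + x)  # dispatches to Tensor.__radd__: [2, 3, 4]
```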

__rand__
__rand__(
x
)
__rdiv__
__rdiv__(
x
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
This function divides x
and y
, forcing Python 2 semantics. That is, if x
and y
are both integers then the result will be an integer. This is in
contrast to Python 3, where division with /
is always a float while division
with //
is always an integer.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y returns the quotient of x and y.

Migrate to TF2
This function is deprecated in TF2. Prefer using the Tensor division operator,
tf.divide
, or tf.math.divide
, which obey the Python 3 division operator
semantics.
__rfloordiv__
__rfloordiv__(
x
)
Divides x / y elementwise, rounding toward the most negative integer.
Mathematically, this is equivalent to floor(x / y). For example:
floor(8.4 / 4.0) = floor(2.1) = 2.0
floor(-8.4 / 4.0) = floor(-2.1) = -3.0
This is equivalent to the '//' operator in Python 3.0 and above.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y rounded toward negative infinity.

Raises  

TypeError

If the inputs are complex. 
__rmatmul__
__rmatmul__(
x
)
Multiplies matrix a
by matrix b
, producing a
* b
.
The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.
Both matrices must be of the same type. The supported types are:
bfloat16
, float16
, float32
, float64
, int32
, int64
,
complex64
, complex128
.
Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.
If one or both of the matrices contain a lot of zeros, a more efficient
multiplication algorithm can be used by setting the corresponding
a_is_sparse
or b_is_sparse
flag to True
. These are False
by default.
This optimization is only available for plain matrices (rank-2 tensors) with data types bfloat16 or float32.
A simple 2D tensor matrix multiplication:
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a # 2D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b # 2D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7, 8],
[ 9, 10],
[11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58, 64],
[139, 154]], dtype=int32)>
A batch matrix multiplication with batch shape [2]:
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a # 3D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b # 3D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
[15, 16],
[17, 18]],
[[19, 20],
[21, 22],
[23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
[229, 244]],
[[508, 532],
[697, 730]]], dtype=int32)>
Since Python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:
d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Args  

a

tf.Tensor of type float16 , float32 , float64 , int32 ,
complex64 , complex128 and rank > 1.

b

tf.Tensor with same type and rank as a .

transpose_a

If True , a is transposed before multiplication.

transpose_b

If True , b is transposed before multiplication.

adjoint_a

If True , a is conjugated and transposed before
multiplication.

adjoint_b

If True , b is conjugated and transposed before
multiplication.

a_is_sparse

If True , a is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in a are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

b_is_sparse

If True , b is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in b are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

output_type

The output datatype if needed. Defaults to None in which case the output_type is the same as input type. Currently only works when input tensors are type (u)int8 and output_type can be int32. 
name

Name for the operation (optional). 
Returns  

A tf.Tensor of the same type as a and b where each innermost matrix
is the product of the corresponding matrices in a and b , e.g. if all
transpose or adjoint attributes are False :


Note

This is matrix product, not elementwise product. 
Raises  

ValueError

If transpose_a and adjoint_a , or transpose_b and
adjoint_b are both set to True .

TypeError

If output_type is specified but the types of a , b and
output_type is not (u)int8, (u)int8 and int32.

__rmod__
__rmod__(
x
)
Returns elementwise remainder of division.
This follows Python semantics in that the
result here is consistent with a flooring divide. E.g.
floor(x / y) * y + floormod(x, y) = x
, regardless of the signs of x and y.
Args  

x

A Tensor . Must be one of the following types: int8 , int16 , int32 ,
int64 , uint8 , uint16 , uint32 , uint64 , bfloat16 , half ,
float32 , float64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .

__rmul__
__rmul__(
x
)
Dispatches cwise mul for "DenseDense" and "DenseSparse".
__ror__
__ror__(
x
)
__rpow__
__rpow__(
x
)
Computes the power of one value to another.
Given a tensor x
and a tensor y
, this operation computes \(x^y\) for
corresponding elements in x
and y
. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args  

x

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

y

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor .

__rsub__
__rsub__(
x
)
Returns x - y elementwise.
Both input and output have a range (-inf, inf).
Example usages below.
Subtract operation between an array and a scalar:
x = [1, 2, 3, 4, 5]
y = 1
tf.subtract(x, y)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 0, -1, -2, -3, -4], dtype=int32)>
Note that binary - operator can be used instead:
x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
x - y
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
Subtract operation between an array and a tensor of same shape:
x = [1, 2, 3, 4, 5]
y = tf.constant([5, 4, 3, 2, 1])
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 4, 2, 0, 2, 4], dtype=int32)>
For example,
x = tf.constant([1, 2], dtype=tf.int8)
y = [2**8 + 1, 2**8 + 2]
tf.subtract(x, y)
<tf.Tensor: shape=(2,), dtype=int8, numpy=array([0, 0], dtype=int8)>
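The [0, 0] result follows from the Python list being cast to the tensor's int8 dtype, which wraps 257 to 1 and 258 to 2 before the subtraction. A sketch in plain Python (to_int8 is a hypothetical helper mimicking two's-complement wrap-around, not a TensorFlow API):

```python
def to_int8(v):
    # Hypothetical helper: simulate a cast to int8 via
    # two's-complement wrap-around into the range [-128, 127].
    return ((v + 128) % 256) - 128

# The non-tensor operand [2**8 + 1, 2**8 + 2] wraps: 257 -> 1, 258 -> 2.
y = [to_int8(v) for v in (2**8 + 1, 2**8 + 2)]
print(y)  # [1, 2]

# Element-wise subtraction then gives [1 - 1, 2 - 2]:
x = [1, 2]
print([a - b for a, b in zip(x, y)])  # [0, 0]
```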
When subtracting two input values of different shapes, tf.subtract follows the general broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
For example,
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(2, 1, 3)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 3), dtype=float64, numpy=
array([[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]])>
Example with inputs of different dimensions:
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(1, 6)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 6), dtype=float64, numpy=
array([[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]]])>
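The shape arithmetic in both examples can be sketched as a small helper (broadcast_shape is hypothetical, illustrating the rule rather than reproducing TensorFlow's implementation):

```python
def broadcast_shape(s1, s2):
    # Sketch of the general broadcasting rule: align shapes at the
    # trailing dimensions, pad the shorter one with leading 1s, and
    # require each aligned pair of dims to be equal or contain a 1.
    n = max(len(s1), len(s2))
    s1 = (1,) * (n - len(s1)) + tuple(s1)
    s2 = (1,) * (n - len(s2)) + tuple(s2)
    out = []
    for a, b in zip(s1, s2):
        if a != b and 1 not in (a, b):
            raise ValueError(f"incompatible dimensions {a} and {b}")
        out.append(max(a, b))
    return tuple(out)

print(broadcast_shape((2, 3, 1), (2, 1, 3)))  # (2, 3, 3)
print(broadcast_shape((2, 3, 1), (1, 6)))     # (2, 3, 6)
```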
Args

x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32, uint64.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).

Returns

A Tensor. Has the same type as x.

__rtruediv__
__rtruediv__(
x
)
Divides x / y elementwise (using Python 3 division operator semantics).
This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.
x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and to float64 for int32 and int64 (matching the behavior of NumPy).
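The distinction between / and // described above follows plain Python 3 semantics, which can be demonstrated directly:

```python
# Python 3 division semantics, mirrored by this operator:
# / always produces a floating-point result, while // floors
# toward negative infinity (it does not truncate toward zero).
print(7 / 2)     # 3.5
print(7 // 2)    # 3
print(-7 // 2)   # -4
```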
Args

x: Tensor numerator of numeric type.
y: Tensor denominator of numeric type.
name: A name for the operation (optional).

Returns

x / y evaluated in floating point.

Raises

TypeError: If x and y have different dtypes.

__rxor__
__rxor__(
x
)
__sub__
__sub__(
y
)
Returns x - y element-wise.
Both input and output have a range of (-inf, inf).

Example usages below.

Subtract operation between an array and a scalar:

x = [1, 2, 3, 4, 5]
y = 1
tf.subtract(x, y)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([ 0, -1, -2, -3, -4], dtype=int32)>

Note that the binary - operator can be used instead:

x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
x - y
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>

Subtract operation between an array and a tensor of the same shape:

x = [1, 2, 3, 4, 5]
y = tf.constant([5, 4, 3, 2, 1])
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([ 4,  2,  0, -2, -4], dtype=int32)>

Warning: If one of the inputs (x or y) is a tensor and the other is a non-tensor, the non-tensor input will adopt (or get casted to) the data type of the tensor input. This can potentially cause unwanted overflow or underflow conversion. For example,

x = tf.constant([1, 2], dtype=tf.int8)
y = [2**8 + 1, 2**8 + 2]
tf.subtract(x, y)
<tf.Tensor: shape=(2,), dtype=int8, numpy=array([0, 0], dtype=int8)>
When subtracting two input values of different shapes, tf.subtract follows the general broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
For example,
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(2, 1, 3)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 3), dtype=float64, numpy=
array([[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]])>
Example with inputs of different dimensions:
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(1, 6)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 6), dtype=float64, numpy=
array([[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]]])>
Args

x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32, uint64.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).

Returns

A Tensor. Has the same type as x.

__truediv__
__truediv__(
y
)
Divides x / y elementwise (using Python 3 division operator semantics).
This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.
x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and to float64 for int32 and int64 (matching the behavior of NumPy).
Args

x: Tensor numerator of numeric type.
y: Tensor denominator of numeric type.
name: A name for the operation (optional).

Returns

x / y evaluated in floating point.

Raises

TypeError: If x and y have different dtypes.

__xor__
__xor__(
y
)