Represents a ragged tensor.
A RaggedTensor is a tensor with one or more ragged dimensions, which are dimensions whose slices may have different lengths. For example, the inner (column) dimension of rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is ragged, since the column slices (rt[0, :], ..., rt[4, :]) have different lengths.
Dimensions whose slices all have the same length are called uniform dimensions. The outermost dimension of a RaggedTensor is always uniform, since it consists of a single slice (and so there is no possibility for differing slice lengths).
The total number of dimensions in a RaggedTensor is called its rank, and the number of ragged dimensions in a RaggedTensor is called its ragged-rank. A RaggedTensor's ragged-rank is fixed at graph creation time: it can't depend on the runtime values of Tensors, and can't vary dynamically for different session runs.
Note that the __init__ constructor is private. Please use one of the following methods to construct a RaggedTensor:
tf.RaggedTensor.from_row_lengths
tf.RaggedTensor.from_value_rowids
tf.RaggedTensor.from_row_splits
tf.RaggedTensor.from_row_starts
tf.RaggedTensor.from_row_limits
tf.RaggedTensor.from_nested_row_splits
tf.RaggedTensor.from_nested_row_lengths
tf.RaggedTensor.from_nested_value_rowids
Potentially Ragged Tensors
Many ops support both Tensors and RaggedTensors (see tf.ragged for a full listing). The term "potentially ragged tensor" may be used to refer to a tensor that might be either a Tensor or a RaggedTensor. The ragged-rank of a Tensor is zero.
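For example, many element-wise ops dispatch on ragged inputs as well as dense ones; a minimal sketch (assuming eager execution, not taken from the original reference):
digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2]])
tf.add(digits, 1)
<tf.RaggedTensor [[4, 2, 5, 2], [], [6, 10, 3]]>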
Documenting RaggedTensor Shapes
When documenting the shape of a RaggedTensor, ragged dimensions can be indicated by enclosing them in parentheses. For example, the shape of a 3-D RaggedTensor that stores the fixed-size word embedding for each word in a sentence, for each sentence in a batch, could be written as [num_sentences, (num_words), embedding_size]. The parentheses around (num_words) indicate that the dimension is ragged, and that the length of each element list in that dimension may vary for each item.
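As a quick illustration (a small sketch, not part of the original reference), a ragged dimension shows up as None in the static shape:
rt = tf.ragged.constant([[1, 2, 3], [4]])
print(rt.shape)
(2, None)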
Component Tensors
Internally, a RaggedTensor consists of a concatenated list of values that are partitioned into variable-length rows. In particular, each RaggedTensor consists of:
- A values tensor, which concatenates the variable-length rows into a flattened list. For example, the values tensor for [[3, 1, 4, 1], [], [5, 9, 2], [6], []] is [3, 1, 4, 1, 5, 9, 2, 6].
- A row_splits vector, which indicates how those flattened values are divided into rows. In particular, the values for row rt[i] are stored in the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]].
Example:
print(tf.RaggedTensor.from_row_splits(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_splits=[0, 4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
Alternative Row-Partitioning Schemes
In addition to row_splits, ragged tensors provide support for five other row-partitioning schemes:
- row_lengths: a vector with shape [nrows], which specifies the length of each row.
- value_rowids and nrows: value_rowids is a vector with shape [nvals], corresponding one-to-one with values, which specifies each value's row index. In particular, the row rt[row] consists of the values rt.values[j] where value_rowids[j]==row. nrows is an integer scalar that specifies the number of rows in the RaggedTensor. (nrows is used to indicate trailing empty rows.)
- row_starts: a vector with shape [nrows], which specifies the start offset of each row. Equivalent to row_splits[:-1].
- row_limits: a vector with shape [nrows], which specifies the stop offset of each row. Equivalent to row_splits[1:].
- uniform_row_length: a scalar tensor, specifying the length of every row. This row-partitioning scheme may only be used if all rows have the same length.
Example: The following ragged tensors are equivalent, and all represent the nested list [[3, 1, 4, 1], [], [5, 9, 2], [6], []]. (The final from_uniform_row_length example is the exception: uniform_row_length can only describe rows of equal length, so it partitions the same values into a different, uniform structure.)
values = [3, 1, 4, 1, 5, 9, 2, 6]
RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0])
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
RaggedTensor.from_value_rowids(
values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5)
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8])
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8])
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
RaggedTensor.from_uniform_row_length(values, uniform_row_length=2)
<tf.RaggedTensor [[3, 1], [4, 1], [5, 9], [2, 6]]>
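The partitioning tensors can also be recovered from an existing RaggedTensor. A small sketch (not from the original reference), reusing values and the row_splits partition from above:
rt = tf.RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])
print(rt.row_splits.numpy())
[0 4 4 7 8 8]
print(rt.row_lengths().numpy())
[4 0 3 1 0]
print(rt.value_rowids().numpy())
[0 0 0 0 2 2 2 3]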
Multiple Ragged Dimensions
RaggedTensors with multiple ragged dimensions can be defined by using a nested RaggedTensor for the values tensor. Each nested RaggedTensor adds a single ragged dimension.
inner_rt = RaggedTensor.from_row_splits( # =rt1 from above
values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8])
outer_rt = RaggedTensor.from_row_splits(
values=inner_rt, row_splits=[0, 3, 3, 5])
print(outer_rt.to_list())
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
print(outer_rt.ragged_rank)
2
The factory function RaggedTensor.from_nested_row_splits may be used to construct a RaggedTensor with multiple ragged dimensions directly, by providing a list of row_splits tensors:
RaggedTensor.from_nested_row_splits(
flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
nested_row_splits=([0, 3, 3, 5], [0, 4, 4, 7, 8, 8])).to_list()
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
Uniform Inner Dimensions
RaggedTensors with uniform inner dimensions can be defined by using a multidimensional Tensor for values.
rt = RaggedTensor.from_row_splits(values=tf.ones([5, 3], tf.int32),
row_splits=[0, 2, 5])
print(rt.to_list())
[[[1, 1, 1], [1, 1, 1]],
[[1, 1, 1], [1, 1, 1], [1, 1, 1]]]
print(rt.shape)
(2, None, 3)
Uniform Outer Dimensions
RaggedTensors with uniform outer dimensions can be defined by using one or more RaggedTensors with a uniform_row_length row-partitioning tensor. For example, a RaggedTensor with shape [2, 2, None] can be constructed from a RaggedTensor values with shape [4, None] using from_uniform_row_length:
values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
print(values.shape)
(4, None)
rt6 = tf.RaggedTensor.from_uniform_row_length(values, 2)
print(rt6)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
print(rt6.shape)
(2, 2, None)
Note that rt6 only contains one ragged dimension (the innermost dimension). In contrast, if from_row_splits is used to construct a similar RaggedTensor, then that RaggedTensor will have two ragged dimensions:
rt7 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])
print(rt7.shape)
(2, None, None)
Uniform and ragged outer dimensions may be interleaved, meaning that a tensor with any combination of ragged and uniform dimensions may be created. For example, a RaggedTensor t4 with shape [3, None, 4, 8, None, 2] could be constructed as follows:
t0 = tf.zeros([1000, 2]) # Shape: [1000, 2]
t1 = RaggedTensor.from_row_lengths(t0, [...]) # [160, None, 2]
t2 = RaggedTensor.from_uniform_row_length(t1, 8) # [20, 8, None, 2]
t3 = RaggedTensor.from_uniform_row_length(t2, 4) # [5, 4, 8, None, 2]
t4 = RaggedTensor.from_row_lengths(t3, [...]) # [3, None, 4, 8, None, 2]
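A smaller, concrete version of the same interleaving pattern (an illustrative sketch with made-up row lengths, not from the original reference):
t0 = tf.zeros([6, 2])                                    # Shape: [6, 2]
t1 = tf.RaggedTensor.from_row_lengths(t0, [1, 2, 0, 3])  # [4, None, 2]
t2 = tf.RaggedTensor.from_uniform_row_length(t1, 2)      # [2, 2, None, 2]
print(t2.shape)
(2, 2, None, 2)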
Methods
bounding_shape
bounding_shape(
axis=None, name=None, out_type=None
)
Returns the tight bounding box shape for this RaggedTensor.
Args | |
---|---|
axis
|
An integer scalar or vector indicating which axes to return the bounding box for. If not specified, then the full bounding box is returned. |
name
|
A name prefix for the returned tensor (optional). |
out_type
|
dtype for the returned tensor. Defaults to
self.row_splits.dtype .
|
Returns | |
---|---|
An integer Tensor (dtype=self.row_splits.dtype ). If axis is not
specified, then output is a vector with
output.shape=[self.shape.ndims] . If axis is a scalar, then the
output is a scalar. If axis is a vector, then output is a vector,
where output[i] is the bounding size for dimension axis[i] .
|
Example:
rt = tf.ragged.constant([[1, 2, 3, 4], [5], [], [6, 7, 8, 9], [10]])
rt.bounding_shape().numpy()
array([5, 4])
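The bounding size of a single axis can also be queried; a small follow-on sketch using the same rt (not from the original reference):
rt.bounding_shape(axis=1).numpy()
4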
consumers
consumers()
from_nested_row_lengths
@classmethod
from_nested_row_lengths( flat_values, nested_row_lengths, name=None, validate=True )
Creates a RaggedTensor from a nested list of row_lengths tensors.
Equivalent to:
result = flat_values
for row_lengths in reversed(nested_row_lengths):
  result = from_row_lengths(result, row_lengths)
Args | |
---|---|
flat_values
|
A potentially ragged tensor. |
nested_row_lengths
|
A list of 1-D integer tensors. The i th tensor is
used as the row_lengths for the i th ragged dimension.
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor (or flat_values if nested_row_lengths is empty).
|
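Example (an illustrative sketch, mirroring the from_nested_row_splits example above):
tf.RaggedTensor.from_nested_row_lengths(
    flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
    nested_row_lengths=([3, 0, 2], [4, 0, 3, 1, 0])).to_list()
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]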
from_nested_row_splits
@classmethod
from_nested_row_splits( flat_values, nested_row_splits, name=None, validate=True )
Creates a RaggedTensor from a nested list of row_splits tensors.
Equivalent to:
result = flat_values
for row_splits in reversed(nested_row_splits):
  result = from_row_splits(result, row_splits)
Args | |
---|---|
flat_values
|
A potentially ragged tensor. |
nested_row_splits
|
A list of 1-D integer tensors. The i th tensor is
used as the row_splits for the i th ragged dimension.
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor (or flat_values if nested_row_splits is empty).
|
from_nested_value_rowids
@classmethod
from_nested_value_rowids( flat_values, nested_value_rowids, nested_nrows=None, name=None, validate=True )
Creates a RaggedTensor from a nested list of value_rowids tensors.
Equivalent to:
result = flat_values
for (rowids, nrows) in reversed(list(zip(nested_value_rowids, nested_nrows))):
  result = from_value_rowids(result, rowids, nrows)
Args | |
---|---|
flat_values
|
A potentially ragged tensor. |
nested_value_rowids
|
A list of 1-D integer tensors. The i th tensor is
used as the value_rowids for the i th ragged dimension.
|
nested_nrows
|
A list of integer scalars. The i th scalar is used as the
nrows for the i th ragged dimension.
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor (or flat_values if nested_value_rowids is empty).
|
Raises | |
---|---|
ValueError
|
If len(nested_values_rowids) != len(nested_nrows) .
|
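Example (an illustrative sketch producing the same nested list as the other from_nested_* examples):
tf.RaggedTensor.from_nested_value_rowids(
    flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
    nested_value_rowids=([0, 0, 0, 2, 2], [0, 0, 0, 0, 2, 2, 2, 3]),
    nested_nrows=(3, 5)).to_list()
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]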
from_row_lengths
@classmethod
from_row_lengths( values, row_lengths, name=None, validate=True )
Creates a RaggedTensor with rows partitioned by row_lengths.
The returned RaggedTensor corresponds with the python list defined by:
result = [[values.pop(0) for i in range(length)]
for length in row_lengths]
Args | |
---|---|
values
|
A potentially ragged tensor with shape [nvals, ...] .
|
row_lengths
|
A 1-D integer tensor with shape [nrows] . Must be
nonnegative. sum(row_lengths) must be nvals .
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor . result.rank = values.rank + 1 .
result.ragged_rank = values.ragged_rank + 1 .
|
Example:
print(tf.RaggedTensor.from_row_lengths(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_lengths=[4, 0, 3, 1, 0]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_row_limits
@classmethod
from_row_limits( values, row_limits, name=None, validate=True )
Creates a RaggedTensor with rows partitioned by row_limits.
Equivalent to: from_row_splits(values, concat([0, row_limits])).
Args | |
---|---|
values
|
A potentially ragged tensor with shape [nvals, ...] .
|
row_limits
|
A 1-D integer tensor with shape [nrows] . Must be sorted in
ascending order. If nrows>0 , then row_limits[-1] must be nvals .
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor . result.rank = values.rank + 1 .
result.ragged_rank = values.ragged_rank + 1 .
|
Example:
print(tf.RaggedTensor.from_row_limits(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_limits=[4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_row_splits
@classmethod
from_row_splits( values, row_splits, name=None, validate=True )
Creates a RaggedTensor with rows partitioned by row_splits.
The returned RaggedTensor corresponds with the python list defined by:
result = [values[row_splits[i]:row_splits[i + 1]]
for i in range(len(row_splits) - 1)]
Args | |
---|---|
values
|
A potentially ragged tensor with shape [nvals, ...] .
|
row_splits
|
A 1-D integer tensor with shape [nrows+1] . Must not be
empty, and must be sorted in ascending order. row_splits[0] must be
zero and row_splits[-1] must be nvals .
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor . result.rank = values.rank + 1 .
result.ragged_rank = values.ragged_rank + 1 .
|
Raises | |
---|---|
ValueError
|
If row_splits is an empty list.
|
Example:
print(tf.RaggedTensor.from_row_splits(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_splits=[0, 4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_row_starts
@classmethod
from_row_starts( values, row_starts, name=None, validate=True )
Creates a RaggedTensor with rows partitioned by row_starts.
Equivalent to: from_row_splits(values, concat([row_starts, nvals])).
Args | |
---|---|
values
|
A potentially ragged tensor with shape [nvals, ...] .
|
row_starts
|
A 1-D integer tensor with shape [nrows] . Must be
nonnegative and sorted in ascending order. If nrows>0 , then
row_starts[0] must be zero.
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor . result.rank = values.rank + 1 .
result.ragged_rank = values.ragged_rank + 1 .
|
Example:
print(tf.RaggedTensor.from_row_starts(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_starts=[0, 4, 4, 7, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
from_sparse
@classmethod
from_sparse( st_input, name=None, row_splits_dtype=
tf.dtypes.int64
)
Converts a 2D tf.sparse.SparseTensor to a RaggedTensor.
Each row of the output RaggedTensor will contain the explicit values from the same row in st_input. st_input must be ragged-right; if it is not ragged-right, then an error will be generated.
Example:
indices = [[0, 0], [0, 1], [0, 2], [1, 0], [3, 0]]
st = tf.sparse.SparseTensor(indices=indices,
values=[1, 2, 3, 4, 5],
dense_shape=[4, 3])
tf.RaggedTensor.from_sparse(st).to_list()
[[1, 2, 3], [4], [], [5]]
Currently, only two-dimensional SparseTensors are supported.
Args | |
---|---|
st_input
|
The sparse tensor to convert. Must have rank 2. |
name
|
A name prefix for the returned tensors (optional). |
row_splits_dtype
|
dtype for the returned RaggedTensor 's row_splits
tensor. One of tf.int32 or tf.int64 .
|
Returns | |
---|---|
A RaggedTensor with the same values as st_input .
output.ragged_rank = rank(st_input) - 1 .
output.shape = [st_input.dense_shape[0], None] .
|
Raises | |
---|---|
ValueError
|
If the number of dimensions in st_input is not known
statically, or is not two.
|
from_tensor
@classmethod
from_tensor( tensor, lengths=None, padding=None, ragged_rank=1, name=None, row_splits_dtype=
tf.dtypes.int64
)
Converts a tf.Tensor into a RaggedTensor.
The set of absent/default values may be specified using a vector of lengths or a padding value (but not both). If lengths is specified, then the output tensor will satisfy output[row] = tensor[row][:lengths[row]]. If lengths is a list of lists or tuple of lists, those lists will be used as nested row lengths. If padding is specified, then any row suffix consisting entirely of padding will be excluded from the returned RaggedTensor. If neither lengths nor padding is specified, then the returned RaggedTensor will have no absent/default values.
Examples:
dt = tf.constant([[5, 7, 0], [0, 3, 0], [6, 0, 0]])
tf.RaggedTensor.from_tensor(dt)
<tf.RaggedTensor [[5, 7, 0], [0, 3, 0], [6, 0, 0]]>
tf.RaggedTensor.from_tensor(dt, lengths=[1, 0, 3])
<tf.RaggedTensor [[5], [], [6, 0, 0]]>
tf.RaggedTensor.from_tensor(dt, padding=0)
<tf.RaggedTensor [[5, 7], [0, 3], [6]]>
dt = tf.constant([[[5, 0], [7, 0], [0, 0]],
[[0, 0], [3, 0], [0, 0]],
[[6, 0], [0, 0], [0, 0]]])
tf.RaggedTensor.from_tensor(dt, lengths=([2, 0, 3], [1, 1, 2, 0, 1]))
<tf.RaggedTensor [[[5], [7]], [], [[6, 0], [], [0]]]>
Args | |
---|---|
tensor
|
The Tensor to convert. Must have rank ragged_rank + 1 or
higher.
|
lengths
|
An optional set of row lengths, specified using a 1-D integer
Tensor whose length is equal to tensor.shape[0] (the number of rows
in tensor ). If specified, then output[row] will contain
tensor[row][:lengths[row]] . Negative lengths are treated as zero. You
may optionally pass a list or tuple of lengths to this argument, which
will be used as nested row lengths to construct a ragged tensor with
multiple ragged dimensions.
|
padding
|
An optional padding value. If specified, then any row suffix
consisting entirely of padding will be excluded from the returned
RaggedTensor. padding is a Tensor with the same dtype as tensor
and with shape=tensor.shape[ragged_rank + 1:] .
|
ragged_rank
|
Integer specifying the ragged rank for the returned
RaggedTensor . Must be greater than zero.
|
name
|
A name prefix for the returned tensors (optional). |
row_splits_dtype
|
dtype for the returned RaggedTensor 's row_splits
tensor. One of tf.int32 or tf.int64 .
|
Returns | |
---|---|
A RaggedTensor with the specified ragged_rank . The shape of the
returned ragged tensor is compatible with the shape of tensor .
|
Raises | |
---|---|
ValueError
|
If both lengths and padding are specified.
|
ValueError
|
If the rank of tensor is 0 or 1.
|
from_uniform_row_length
@classmethod
from_uniform_row_length( values, uniform_row_length, nrows=None, validate=True, name=None )
Creates a RaggedTensor with rows partitioned by uniform_row_length.
This method can be used to create RaggedTensors with multiple uniform outer dimensions. For example, a RaggedTensor with shape [2, 2, None] can be constructed with this method from a RaggedTensor values with shape [4, None]:
values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
print(values.shape)
(4, None)
rt1 = tf.RaggedTensor.from_uniform_row_length(values, 2)
print(rt1)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
print(rt1.shape)
(2, 2, None)
Note that rt1 only contains one ragged dimension (the innermost dimension). In contrast, if from_row_splits is used to construct a similar RaggedTensor, then that RaggedTensor will have two ragged dimensions:
rt2 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])
print(rt2.shape)
(2, None, None)
Args | |
---|---|
values
|
A potentially ragged tensor with shape [nvals, ...] .
|
uniform_row_length
|
A scalar integer tensor. Must be nonnegative. The
size of the outer axis of values must be evenly divisible by
uniform_row_length .
|
nrows
|
The number of rows in the constructed RaggedTensor. If not
specified, then it defaults to nvals/uniform_row_length (or 0 if
uniform_row_length==0 ). nrows only needs to be specified if
uniform_row_length might be zero. uniform_row_length*nrows must be
nvals .
|
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
name
|
A name prefix for the RaggedTensor (optional). |
Returns | |
---|---|
A RaggedTensor that corresponds with the python list defined by:
result = [[values.pop(0) for i in range(uniform_row_length)]
          for _ in range(nrows)]
|
from_value_rowids
@classmethod
from_value_rowids( values, value_rowids, nrows=None, name=None, validate=True )
Creates a RaggedTensor with rows partitioned by value_rowids.
The returned RaggedTensor corresponds with the python list defined by:
result = [[values[i] for i in range(len(values)) if value_rowids[i] == row]
for row in range(nrows)]
Args | |
---|---|
values
|
A potentially ragged tensor with shape [nvals, ...] .
|
value_rowids
|
A 1-D integer tensor with shape [nvals] , which corresponds
one-to-one with values , and specifies each value's row index. Must be
nonnegative, and must be sorted in ascending order.
|
nrows
|
An integer scalar specifying the number of rows. This should be
specified if the RaggedTensor may contain empty trailing rows. Must
be greater than value_rowids[-1] (or zero if value_rowids is empty).
Defaults to value_rowids[-1] + 1 (or zero if value_rowids is empty).
|
name
|
A name prefix for the RaggedTensor (optional). |
validate
|
If true, then use assertions to check that the arguments form
a valid RaggedTensor . Note: these assertions incur a runtime cost,
since they must be checked for each tensor value.
|
Returns | |
---|---|
A RaggedTensor . result.rank = values.rank + 1 .
result.ragged_rank = values.ragged_rank + 1 .
|
Raises | |
---|---|
ValueError
|
If nrows is incompatible with value_rowids .
|
Example:
print(tf.RaggedTensor.from_value_rowids(
values=[3, 1, 4, 1, 5, 9, 2, 6],
value_rowids=[0, 0, 0, 0, 2, 2, 2, 3],
nrows=5))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
get_shape
get_shape() -> tf.TensorShape
The statically known shape of this ragged tensor.
Returns | |
---|---|
A TensorShape containing the statically known shape of this ragged
tensor. Ragged dimensions have a size of None .
|
Alias for the shape property.
Examples:
tf.ragged.constant([[0], [1, 2]]).get_shape()
TensorShape([2, None])
tf.ragged.constant(
[[[0, 1]], [[1, 2], [3, 4]]], ragged_rank=1).get_shape()
TensorShape([2, None, 2])
merge_dims
merge_dims(
outer_axis, inner_axis
)
Merges outer_axis...inner_axis into a single dimension.
Returns a copy of this RaggedTensor with the specified range of dimensions flattened into a single dimension, with elements in row-major order.
Examples:
rt = tf.ragged.constant([[[1, 2], [3]], [[4, 5, 6]]])
print(rt.merge_dims(0, 1))
<tf.RaggedTensor [[1, 2], [3], [4, 5, 6]]>
print(rt.merge_dims(1, 2))
<tf.RaggedTensor [[1, 2, 3], [4, 5, 6]]>
print(rt.merge_dims(0, 2))
tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)
To mimic the behavior of np.flatten (which flattens all dimensions), use rt.merge_dims(0, -1). To mimic the behavior of tf.layers.Flatten (which flattens all dimensions except the outermost batch dimension), use rt.merge_dims(1, -1).
Args | |
---|---|
outer_axis
|
int : The first dimension in the range of dimensions to
merge. May be negative if self.shape.rank is statically known.
|
inner_axis
|
int : The last dimension in the range of dimensions to merge.
May be negative if self.shape.rank is statically known.
|
Returns | |
---|---|
A copy of this tensor, with the specified dimensions merged into a
single dimension. The shape of the returned tensor will be
self.shape[:outer_axis] + [N] + self.shape[inner_axis + 1:] , where N
is the total number of slices in the merged dimensions.
|
nested_row_lengths
nested_row_lengths(
name=None
)
Returns a tuple containing the row_lengths for all ragged dimensions.
rt.nested_row_lengths() is a tuple containing the row_lengths tensors for all ragged dimensions in rt, ordered from outermost to innermost.
Args | |
---|---|
name
|
A name prefix for the returned tensors (optional). |
Returns | |
---|---|
A tuple of 1-D integer Tensors . The length of the tuple is equal to
self.ragged_rank .
|
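Example (an illustrative sketch, assuming eager execution):
rt = tf.ragged.constant([[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]])
for i, lengths in enumerate(rt.nested_row_lengths()):
  print('row lengths for dimension %d: %s' % (i+1, lengths.numpy()))
row lengths for dimension 1: [3 0 2]
row lengths for dimension 2: [4 0 3 1 0]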
nested_value_rowids
nested_value_rowids(
name=None
)
Returns a tuple containing the value_rowids for all ragged dimensions.
rt.nested_value_rowids is a tuple containing the value_rowids tensors for all ragged dimensions in rt, ordered from outermost to innermost. In particular, rt.nested_value_rowids = (rt.value_rowids(),) + value_ids where:
- value_ids = () if rt.values is a Tensor.
- value_ids = rt.values.nested_value_rowids otherwise.
Args | |
---|---|
name
|
A name prefix for the returned tensors (optional). |
Returns | |
---|---|
A tuple of 1-D integer Tensor s.
|
Example:
rt = tf.ragged.constant(
[[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]])
for i, ids in enumerate(rt.nested_value_rowids()):
print('row ids for dimension %d: %s' % (i+1, ids.numpy()))
row ids for dimension 1: [0 0 0]
row ids for dimension 2: [0 0 0 2 2]
row ids for dimension 3: [0 0 0 0 2 2 2 3]
nrows
nrows(
out_type=None, name=None
)
Returns the number of rows in this ragged tensor.
I.e., the size of the outermost dimension of the tensor.
Args | |
---|---|
out_type
|
dtype for the returned tensor. Defaults to
self.row_splits.dtype .
|
name
|
A name prefix for the returned tensor (optional). |
Returns | |
---|---|
A scalar Tensor with dtype out_type .
|
Example:
rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
print(rt.nrows()) # rt has 5 rows.
tf.Tensor(5, shape=(), dtype=int64)
numpy
numpy()
Returns a numpy array with the values for this RaggedTensor.
Requires that this RaggedTensor was constructed in eager execution mode.
Ragged dimensions are encoded using numpy arrays with dtype=object and rank=1, where each element is a single row.
Examples
In the following example, the value returned by RaggedTensor.numpy() contains three numpy array objects: one for each row (with rank=1 and dtype=int64), and one to combine them (with rank=1 and dtype=object):
tf.ragged.constant([[1, 2, 3], [4, 5]], dtype=tf.int64).numpy()
array([array([1, 2, 3]), array([4, 5])], dtype=object)
Uniform dimensions are encoded using multidimensional numpy arrays. In the following example, the value returned by RaggedTensor.numpy() contains a single numpy array object, with rank=2 and dtype=int64:
tf.ragged.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64).numpy()
array([[1, 2, 3], [4, 5, 6]])
Returns | |
---|---|
A numpy array .
|
row_lengths
row_lengths(
axis=1, name=None
)
Returns the lengths of the rows in this ragged tensor.
rt.row_lengths()[i] indicates the number of values in the i-th row of rt.
Args | |
---|---|
axis
|
An integer constant indicating the axis whose row lengths should be returned. |
name
|
A name prefix for the returned tensor (optional). |
Returns | |
---|---|
A potentially ragged integer Tensor with shape self.shape[:axis] .
|
Raises | |
---|---|
ValueError
|
If axis is out of bounds.
|
Example:
rt = tf.ragged.constant(
[[[3, 1, 4], [1]], [], [[5, 9], [2]], [[6]], []])
print(rt.row_lengths()) # lengths of rows in rt
tf.Tensor([2 0 2 1 0], shape=(5,), dtype=int64)
print(rt.row_lengths(axis=2)) # lengths of axis=2 rows.
<tf.RaggedTensor [[3, 1], [], [2, 1], [1], []]>
row_limits
row_limits(
name=None
)
Returns the limit indices for rows in this ragged tensor.
These indices specify where the values for each row end in self.values. rt.row_limits() is equal to rt.row_splits[1:].
Args | |
---|---|
name
|
A name prefix for the returned tensor (optional). |
Returns | |
---|---|
A 1-D integer Tensor with shape [nrows] .
The returned tensor is nonnegative, and is sorted in ascending order.
|
Example:
rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
print(rt.row_limits()) # indices of row limits in rt.values
tf.Tensor([4 4 7 8 8], shape=(5,), dtype=int64)
row_starts
row_starts(
name=None
)
Returns the start indices for rows in this ragged tensor.
These indices specify where the values for each row begin in self.values. rt.row_starts() is equal to rt.row_splits[:-1].
Args | |
---|---|
name
|
A name prefix for the returned tensor (optional). |
Returns | |
---|---|
A 1-D integer Tensor with shape [nrows] .
The returned tensor is nonnegative, and is sorted in ascending order.
|
Example:
rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
print(rt.row_starts()) # indices of row starts in rt.values
tf.Tensor([0 4 4 7 8], shape=(5,), dtype=int64)
to_list
to_list()
Returns a nested Python list with the values for this RaggedTensor.
Requires that rt was constructed in eager execution mode.
Returns | |
---|---|
A nested Python list .
|
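Example (a minimal sketch, assuming eager execution):
rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
rt.to_list()
[[3, 1, 4, 1], [], [5, 9, 2], [6], []]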
to_sparse
to_sparse(
name=None
)
Converts this RaggedTensor into a tf.sparse.SparseTensor.
Example:
rt = tf.ragged.constant([[1, 2, 3], [4], [], [5, 6]])
print(rt.to_sparse())
SparseTensor(indices=tf.Tensor(
[[0 0] [0 1] [0 2] [1 0] [3 0] [3 1]],
shape=(6, 2), dtype=int64),
values=tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32),
dense_shape=tf.Tensor([4 3], shape=(2,), dtype=int64))
Args | |
---|---|
name
|
A name prefix for the returned tensors (optional). |
Returns | |
---|---|
A SparseTensor with the same values as self .
|
to_tensor
to_tensor(
default_value=None, name=None, shape=None
)
Converts this RaggedTensor into a tf.Tensor.
If shape is specified, then the result is padded and/or truncated to the specified shape.
Examples:
rt = tf.ragged.constant([[9, 8, 7], [], [6, 5], [4]])
print(rt.to_tensor())
tf.Tensor(
[[9 8 7] [0 0 0] [6 5 0] [4 0 0]], shape=(4, 3), dtype=int32)
print(rt.to_tensor(shape=[5, 2]))
tf.Tensor(
[[9 8] [0 0] [6 5] [4 0] [0 0]], shape=(5, 2), dtype=int32)
Args | |
---|---|
default_value
|
Value to set for indices not specified in self . Defaults
to zero. default_value must be broadcastable to
self.shape[self.ragged_rank + 1:] .
|
name
|
A name prefix for the returned tensors (optional). |
shape
|
The shape of the resulting dense tensor. In particular,
result.shape[i] is shape[i] (if shape[i] is not None), or
self.bounding_shape(i) (otherwise).shape.rank must be None or
equal to self.rank .
|
Returns | |
---|---|
A Tensor with shape ragged.bounding_shape(self) and the
values specified by the non-empty values in self . Empty values are
assigned default_value .
|
value_rowids
value_rowids(
name=None
)
Returns the row indices for the values in this ragged tensor.
rt.value_rowids() corresponds one-to-one with the outermost dimension of rt.values, and specifies the row containing each value. In particular, the row rt[row] consists of the values rt.values[j] where rt.value_rowids()[j] == row.
Args | |
---|---|
name
|
A name prefix for the returned tensor (optional). |
Returns | |
---|---|
A 1-D integer Tensor with shape self.values.shape[:1] .
The returned tensor is nonnegative, and is sorted in ascending order.
|
Example:
rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
print(rt.value_rowids()) # corresponds 1:1 with rt.values
tf.Tensor([0 0 0 0 2 2 2 3], shape=(8,), dtype=int64)
with_flat_values
with_flat_values(
new_values
)
Returns a copy of self with flat_values replaced by new_values.
Preserves cached row-partitioning tensors such as self.cached_nrows and self.cached_value_rowids if they have values.
Args | |
---|---|
new_values
|
Potentially ragged tensor that should replace
self.flat_values . Must have rank > 0 , and must have the same number
of rows as self.flat_values .
|
Returns | |
---|---|
A RaggedTensor .
result.rank = self.ragged_rank + new_values.rank .
result.ragged_rank = self.ragged_rank + new_values.ragged_rank .
|
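Example (an illustrative sketch; the replacement values here are made up):
rt = tf.ragged.constant([[[1, 2], [3]], [[4]]])
print(rt.flat_values)
tf.Tensor([1 2 3 4], shape=(4,), dtype=int32)
print(rt.with_flat_values(tf.constant([10, 20, 30, 40])))
<tf.RaggedTensor [[[10, 20], [30]], [[40]]]>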
with_row_splits_dtype
with_row_splits_dtype(
dtype
)
Returns a copy of this RaggedTensor with the given row_splits dtype.
For RaggedTensors with multiple ragged dimensions, the row_splits for all nested RaggedTensor objects are cast to the given dtype.
Args | |
---|---|
dtype
|
The dtype for row_splits . One of tf.int32 or tf.int64 .
|
Returns | |
---|---|
A copy of this RaggedTensor, with the row_splits cast to the given
type.
|
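Example (a small sketch; tf.ragged.constant defaults to int64 row_splits):
rt = tf.ragged.constant([[1, 2], [3]])
print(rt.row_splits.dtype)
<dtype: 'int64'>
print(rt.with_row_splits_dtype(tf.int32).row_splits.dtype)
<dtype: 'int32'>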
with_values
with_values(
new_values
)
Returns a copy of self with values replaced by new_values.
Preserves cached row-partitioning tensors such as self.cached_nrows and self.cached_value_rowids if they have values.
Args | |
---|---|
new_values
|
Potentially ragged tensor to use as the values for the
returned RaggedTensor . Must have rank > 0 , and must have the same
number of rows as self.values .
|
Returns | |
---|---|
A RaggedTensor . result.rank = 1 + new_values.rank .
result.ragged_rank = 1 + new_values.ragged_rank
|
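Example (an illustrative sketch; the replacement values here are made up):
rt = tf.RaggedTensor.from_row_splits([3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8])
print(rt.with_values(tf.constant([0, 1, 2, 3, 4, 5, 6, 7])))
<tf.RaggedTensor [[0, 1, 2, 3], [], [4, 5, 6], [7], []]>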
__abs__
__abs__(
name=None
)
Computes the absolute value of a ragged tensor.
Given a ragged tensor of integer or floating-point values, this operation returns a ragged tensor of the same type, where each element contains the absolute value of the corresponding element in the input.
Given a ragged tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. For a complex number \(a + bj\), its absolute value is computed as \(\sqrt{a^2 + b^2}\).
For example:
# real number
x = tf.ragged.constant([[-2.2, 3.2], [-4.2]])
tf.abs(x)
<tf.RaggedTensor [[2.2, 3.2], [4.2]]>
# complex number
x = tf.ragged.constant([[-2.2 + 4.7j], [-3.2 + 5.7j], [-4.2 + 6.7j]])
tf.abs(x)
<tf.RaggedTensor [[5.189412298131649],
[6.536818798161687],
[7.907591289387685]]>
Args | |
---|---|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A RaggedTensor of the same size and type as x , with absolute values.
Note, for complex64 or complex128 input, the returned RaggedTensor
will be of type float32 or float64 , respectively.
|
__add__
__add__(
y, name=None
)
Returns x + y element-wise.
Example usages below.
Add a scalar and a list:
x = [1, 2, 3, 4, 5]
y = 1
tf.add(x, y)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([2, 3, 4, 5, 6],
dtype=int32)>
Note that the binary + operator can be used instead:
x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
x + y
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([2, 3, 4, 5, 6],
dtype=int32)>
Add a tensor and a list of same shape:
x = [1, 2, 3, 4, 5]
y = tf.constant([1, 2, 3, 4, 5])
tf.add(x, y)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 2, 4, 6, 8, 10], dtype=int32)>
For example, adding values that exceed the range of the tensor's dtype causes the result to wrap around (overflow):
x = tf.constant([1, 2], dtype=tf.int8)
y = [2**7 + 1, 2**7 + 2]
tf.add(x, y)
<tf.Tensor: shape=(2,), dtype=int8, numpy=array([-126, -124], dtype=int8)>
When adding two input values of different shapes, Add follows NumPy broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
For example,
x = np.ones(6).reshape(1, 2, 1, 3)
y = np.ones(6).reshape(2, 1, 3, 1)
tf.add(x, y).shape.as_list()
[2, 2, 3, 3]
Another example with two arrays of different dimension.
x = np.ones([1, 2, 1, 4])
y = np.ones([3, 4])
tf.add(x, y).shape.as_list()
[1, 2, 3, 4]
The reduction version of this elementwise operation is tf.math.reduce_sum
Args | |
---|---|
x
|
A tf.Tensor . Must be one of the following types: bfloat16, half,
float16, float32, float64, uint8, uint16, uint32, uint64, int8, int16,
int32, int64, complex64, complex128, string.
|
y
|
A tf.Tensor . Must have the same type as x.
|
name
|
A name for the operation (optional) |
__and__
__and__(
y, name=None
)
Returns the truth value of elementwise x & y.
Logical AND function.
Requires that x and y have the same shape or have broadcast-compatible shapes. For example, y can be:
- A single Python boolean, where the result will be calculated by applying logical AND with the single element to each element in x.
- A tf.Tensor object of dtype tf.bool of the same shape or a broadcast-compatible shape. In this case, the result will be the element-wise logical AND of x and y.
- A tf.RaggedTensor object of dtype tf.bool of the same shape or a broadcast-compatible shape. In this case, the result will be the element-wise logical AND of x and y.
For example:
# `y` is a Python boolean
x = tf.ragged.constant([[True, False], [True]])
y = True
x & y
<tf.RaggedTensor [[True, False], [True]]>
tf.math.logical_and(x, y) # Equivalent of x & y
<tf.RaggedTensor [[True, False], [True]]>
y & x
<tf.RaggedTensor [[True, False], [True]]>
tf.math.reduce_all(x & y) # Reduce to a scalar bool Tensor.
<tf.Tensor: shape=(), dtype=bool, numpy=False>
# `y` is a tf.Tensor of the same shape.
x = tf.ragged.constant([[True, False], [True, False]])
y = tf.constant([[True, False], [False, True]])
x & y
<tf.RaggedTensor [[True, False], [False, False]]>
# `y` is a tf.Tensor of a broadcast-compatible shape.
x = tf.ragged.constant([[True, False], [True]])
y = tf.constant([[True], [False]])
x & y
<tf.RaggedTensor [[True, False], [False]]>
# `y` is a `tf.RaggedTensor` of the same shape.
x = tf.ragged.constant([[True, False], [True]])
y = tf.ragged.constant([[False, True], [True]])
x & y
<tf.RaggedTensor [[False, False], [True]]>
# `y` is a `tf.RaggedTensor` of a broadcast-compatible shape.
x = tf.ragged.constant([[[True, True, False]], [[]], [[True, False]]])
y = tf.ragged.constant([[[True]], [[True]], [[False]]], ragged_rank=1)
x & y
<tf.RaggedTensor [[[True, True, False]], [[]], [[False, False]]]>
Args | |
---|---|
y
|
A Python boolean or a tf.Tensor or tf.RaggedTensor of dtype
tf.bool .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A tf.RaggedTensor of dtype tf.bool with the shape that x and y
broadcast to.
|
__bool__
__bool__()
Raises TypeError when a RaggedTensor is used as a Python bool.
To prevent a RaggedTensor from being used as a bool, this function always raises a TypeError when called.
For example:
x = tf.ragged.constant([[1, 2], [3]])
result = True if x else False # Evaluate x as a bool value.
Traceback (most recent call last):
TypeError: RaggedTensor may not be used as a boolean.
x = tf.ragged.constant([[1]])
r = (x == 1) # tf.RaggedTensor [[True]]
if r: # Evaluate r as a bool value.
pass
Traceback (most recent call last):
TypeError: RaggedTensor may not be used as a boolean.
__div__
__div__(
y, name=None
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer.
Args | |
---|---|
x
|
Tensor numerator of real numeric type.
|
y
|
Tensor denominator of real numeric type.
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
x / y returns the quotient of x and y.
|
Migrate to TF2
This function is deprecated in TF2. Prefer using the Tensor division operator,
tf.divide
, or tf.math.divide
, which obey the Python 3 division operator
semantics.
__eq__
__eq__(
other
)
Returns the result of elementwise == or False if not broadcast-compatible.
Compares two ragged tensors elementwise for equality if they are broadcast-compatible; or returns False if they are not broadcast-compatible.
Note that this behavior differs from tf.math.equal, which raises an exception if the two ragged tensors are not broadcast-compatible.
For example:
rt1 = tf.ragged.constant([[1, 2], [3]])
rt1 == rt1
<tf.RaggedTensor [[True, True], [True]]>
rt2 = tf.ragged.constant([[1, 2], [4]])
rt1 == rt2
<tf.RaggedTensor [[True, True], [False]]>
rt3 = tf.ragged.constant([[1, 2], [3, 4]])
# rt1 and rt3 are not broadcast-compatible.
rt1 == rt3
False
# You can also compare a `tf.RaggedTensor` to a `tf.Tensor`.
t = tf.constant([[1, 2], [3, 4]])
rt1 == t
False
t == rt1
False
rt4 = tf.ragged.constant([[1, 2], [3, 4]])
rt4 == t
<tf.RaggedTensor [[True, True], [True, True]]>
t == rt4
<tf.RaggedTensor [[True, True], [True, True]]>
Args | |
---|---|
other
|
The right-hand side of the == operator.
|
Returns | |
---|---|
The ragged tensor result of the elementwise == operation, or False if
the arguments are not broadcast-compatible.
|
__floordiv__
__floordiv__(
y, name=None
)
Divides x / y elementwise, rounding toward the most negative integer.
Mathematically, this is equivalent to floor(x / y). For example: floor(8.4 / 4.0) = floor(2.1) = 2.0, and floor(-8.4 / 4.0) = floor(-2.1) = -3.0. This is equivalent to the '//' operator in Python 3.0 and above.
Args | |
---|---|
x
|
Tensor numerator of real numeric type.
|
y
|
Tensor denominator of real numeric type.
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
x / y rounded toward -infinity.
|
Raises | |
---|---|
TypeError
|
If the inputs are complex. |
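For example (a minimal sketch with a ragged operand):
x = tf.ragged.constant([[7, 8, 9], [10]])
x // 2
<tf.RaggedTensor [[3, 4, 4], [5]]>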
__ge__
__ge__(
other
)
Elementwise >= comparison of two convertible-to-ragged-tensor values.
Computes the elementwise >= comparison of two values that are convertible to ragged tensors, with broadcasting support.
Raises an exception if the two values are not broadcast-compatible.
For example:
rt1 = tf.ragged.constant([[1, 2], [3]])
rt1 >= rt1
<tf.RaggedTensor [[True, True], [True]]>
rt2 = tf.ragged.constant([[2, 1], [3]])
rt1 >= rt2
<tf.RaggedTensor [[False, True], [True]]>
rt3 = tf.ragged.constant([[1, 2], [3, 4]])
# rt1 and rt3 are not broadcast-compatible.
rt1 >= rt3
Traceback (most recent call last):
InvalidArgumentError: ...
# You can also compare a `tf.RaggedTensor` to a `tf.Tensor`.
rt4 = tf.ragged.constant([[1, 2],[3, 4]])
t1 = tf.constant([[2, 1], [4, 3]])
rt4 >= t1
<tf.RaggedTensor [[False, True],
[False, True]]>
t1 >= rt4
<tf.RaggedTensor [[True, False],
[True, False]]>
# Compares a `tf.RaggedTensor` to a `tf.Tensor` with broadcasting.
t2 = tf.constant([[2]])
rt4 >= t2
<tf.RaggedTensor [[False, True],
[True, True]]>
t2 >= rt4
<tf.RaggedTensor [[True, True],
[False, False]]>
Args | |
---|---|
other
|
The right-hand side of the >= operator.
|
Returns | |
---|---|
A tf.RaggedTensor of dtype tf.bool with the shape that self and
other broadcast to.
|
Raises | |
---|---|
InvalidArgumentError
|
If self and other are not broadcast-compatible.
|
__getitem__
__getitem__(
key
)
Returns the specified piece of this RaggedTensor.
Supports multidimensional indexing and slicing, with one restriction: indexing into a ragged inner dimension is not allowed. This case is problematic because the indicated value may exist in some rows but not others. In such cases, it's not obvious whether we should (1) report an IndexError; (2) use a default value; or (3) skip that value and return a tensor with fewer rows than we started with. Following the guiding principles of Python ("In the face of ambiguity, refuse the temptation to guess"), we simply disallow this operation.
Args | |
---|---|
rt_input
|
The RaggedTensor to slice. |
key
|
Indicates which piece of the RaggedTensor to return, using standard
Python semantics (e.g., negative values index from the end). key
may have any of the following types:
|
Returns | |
---|---|
A Tensor or RaggedTensor object. Values that include at least one
ragged dimension are returned as RaggedTensor . Values that include no
ragged dimensions are returned as Tensor . See above for examples of
expressions that return Tensor s vs RaggedTensor s.
|
Raises | |
---|---|
ValueError
|
If key is out of bounds.
|
ValueError
|
If key is not supported.
|
TypeError
|
If the indices in key have an unsupported type.
|
Examples:
# A 2-D ragged tensor with 1 ragged dimension.
rt = tf.ragged.constant([['a', 'b', 'c'], ['d', 'e'], ['f'], ['g']])
rt[0].numpy() # First row (1-D `Tensor`)
array([b'a', b'b', b'c'], dtype=object)
rt[:3].to_list() # First three rows (2-D RaggedTensor)
[[b'a', b'b', b'c'], [b'd', b'e'], [b'f']]
rt[3, 0].numpy() # 1st element of 4th row (scalar)
b'g'
# A 3-D ragged tensor with 2 ragged dimensions.
rt = tf.ragged.constant([[[1, 2, 3], [4]],
[[5], [], [6]],
[[7]],
[[8, 9], [10]]])
rt[1].to_list() # Second row (2-D RaggedTensor)
[[5], [], [6]]
rt[3, 0].numpy() # First element of fourth row (1-D Tensor)
array([8, 9], dtype=int32)
rt[:, 1:3].to_list() # Items 1-3 of each row (3-D RaggedTensor)
[[[4]], [[], [6]], [], [[10]]]
rt[:, -1:].to_list() # Last item of each row (3-D RaggedTensor)
[[[4]], [[6]], [[7]], [[10]]]
__gt__
__gt__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x > y) element-wise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args | |
---|---|
x
|
A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .
|
y
|
A Tensor . Must have the same type as x .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor of type bool .
|
__invert__
__invert__(
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of NOT x element-wise.
Example:
tf.math.logical_not(tf.constant([True, False]))
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, True])>
Args | |
---|---|
x
|
A Tensor of type bool .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor of type bool .
|
__le__
__le__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x <= y) element-wise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
Args | |
---|---|
x
|
A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .
|
y
|
A Tensor . Must have the same type as x .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor of type bool .
|
__lt__
__lt__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of (x < y) element-wise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
Args | |
---|---|
x
|
A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .
|
y
|
A Tensor . Must have the same type as x .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor of type bool .
|
__mod__
__mod__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns element-wise remainder of division.
This follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + floormod(x, y) = x, regardless of the signs of x and y.
Args | |
---|---|
x
|
A Tensor . Must be one of the following types: int8 , int16 , int32 , int64 , uint8 , uint16 , uint32 , uint64 , bfloat16 , half , float32 , float64 .
|
y
|
A Tensor . Must have the same type as x .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor . Has the same type as x .
|
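For example (a minimal sketch with a ragged operand):
x = tf.ragged.constant([[7, 8, 9], [10]])
x % 3
<tf.RaggedTensor [[1, 2, 0], [1]]>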
__mul__
__mul__(
y, name=None
)
Returns an element-wise x * y.
For example:
x = tf.constant(([1, 2, 3, 4]))
tf.math.multiply(x, x)
<tf.Tensor: shape=(4,), dtype=..., numpy=array([ 1, 4, 9, 16], dtype=int32)>
Since tf.math.multiply will convert its arguments to Tensors, you can also pass in non-Tensor arguments:
tf.math.multiply(7,6)
<tf.Tensor: shape=(), dtype=int32, numpy=42>
If x.shape is not the same as y.shape, they will be broadcast to a compatible shape. (More about broadcasting here.)
For example:
x = tf.ones([1, 2]);
y = tf.ones([2, 1]);
x * y # Taking advantage of operator overriding
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 1.],
[1., 1.]], dtype=float32)>
The reduction version of this elementwise operation is tf.math.reduce_prod
Args | |
---|---|
x
|
A Tensor. Must be one of the following types: bfloat16 ,
half , float32 , float64 , uint8 , int8 , uint16 ,
int16 , int32 , int64 , complex64 , complex128 .
|
y
|
A Tensor . Must have the same type as x .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor . Has the same type as x .
|
__ne__
__ne__(
other
)
The operation invoked by the Tensor.__ne__ operator.
Compares two tensors element-wise for inequality if they are broadcast-compatible; or returns True if they are not broadcast-compatible. (Note that this behavior differs from tf.math.not_equal, which raises an exception if the two tensors are not broadcast-compatible.)
Purpose in the API | |
---|---|
This method is exposed in TensorFlow's API so that library developers
can register dispatching for Tensor.__ne__ to allow it to handle
custom composite tensors & other custom objects.
The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation. |
Args | |
---|---|
self
|
The left-hand side of the != operator.
|
other
|
The right-hand side of the != operator.
|
Returns | |
---|---|
The result of the elementwise != operation, or True if the arguments
are not broadcast-compatible.
|
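For example (a minimal sketch):
rt1 = tf.ragged.constant([[1, 2], [3]])
rt2 = tf.ragged.constant([[1, 2], [4]])
rt1 != rt2
<tf.RaggedTensor [[False, False], [True]]>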
__neg__
__neg__(
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Computes numerical negative value element-wise.
I.e., \(y = -x\).
Args | |
---|---|
x
|
A Tensor . Must be one of the following types: bfloat16 , half , float32 , float64 , int8 , int16 , int32 , int64 , complex64 , complex128 .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor . Has the same type as x .
|
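For example (a minimal sketch with a ragged operand):
x = tf.ragged.constant([[1, -2], [3]])
-x
<tf.RaggedTensor [[-1, 2], [-3]]>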
__nonzero__
__nonzero__()
Raises TypeError when a RaggedTensor is used as a Python bool.
To prevent a RaggedTensor from being used as a bool, this function always raises a TypeError when called.
For example:
x = tf.ragged.constant([[1, 2], [3]])
result = True if x else False # Evaluate x as a bool value.
Traceback (most recent call last):
TypeError: RaggedTensor may not be used as a boolean.
x = tf.ragged.constant([[1]])
r = (x == 1) # tf.RaggedTensor [[True]]
if r: # Evaluate r as a bool value.
pass
Traceback (most recent call last):
TypeError: RaggedTensor may not be used as a boolean.
__or__
__or__(
y: Annotated[Any, tf.raw_ops.Any
],
name=None
) -> Annotated[Any, tf.raw_ops.Any
]
Returns the truth value of x OR y element-wise.
Logical OR function.
Requires that x and y have the same shape or have broadcast-compatible shapes. For example, x and y can be:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical OR with the single element to each element in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical OR of the two input tensors.
You can also use the | operator instead.
Usage | |
---|---|
This op also supports broadcasting
|
The reduction version of this elementwise operation is tf.math.reduce_any
.
Args | |
---|---|
x
|
A tf.Tensor of type bool.
|
y
|
A tf.Tensor of type bool.
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A tf.Tensor of type bool with the shape that x and y broadcast to.
|
Args | |
---|---|
x
|
A Tensor of type bool .
|
y
|
A Tensor of type bool .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor of type bool .
|
__pow__
__pow__(
y, name=None
)
Computes the power of one value to another.
Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args | |
---|---|
x
|
A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .
|
y
|
A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A Tensor .
|
__radd__
__radd__(
y, name=None
)
Returns x + y element-wise.
Example usages below.
Add a scalar and a list:
x = [1, 2, 3, 4, 5]
y = 1
tf.add(x, y)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([2, 3, 4, 5, 6],
dtype=int32)>
Note that the binary + operator can be used instead:
x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
x + y
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([2, 3, 4, 5, 6],
dtype=int32)>
Add a tensor and a list of same shape:
x = [1, 2, 3, 4, 5]
y = tf.constant([1, 2, 3, 4, 5])
tf.add(x, y)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 2, 4, 6, 8, 10], dtype=int32)>
For example, adding values that exceed the range of the tensor's dtype causes the result to wrap around (overflow):
x = tf.constant([1, 2], dtype=tf.int8)
y = [2**7 + 1, 2**7 + 2]
tf.add(x, y)
<tf.Tensor: shape=(2,), dtype=int8, numpy=array([-126, -124], dtype=int8)>
When adding two input values of different shapes, Add follows NumPy broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
For example,
x = np.ones(6).reshape(1, 2, 1, 3)
y = np.ones(6).reshape(2, 1, 3, 1)
tf.add(x, y).shape.as_list()
[2, 2, 3, 3]
Another example with two arrays of different dimension.
x = np.ones([1, 2, 1, 4])
y = np.ones([3, 4])
tf.add(x, y).shape.as_list()
[1, 2, 3, 4]
The reduction version of this elementwise operation is tf.math.reduce_sum
Args | |
---|---|
x
|
A tf.Tensor . Must be one of the following types: bfloat16, half,
float16, float32, float64, uint8, uint16, uint32, uint64, int8, int16,
int32, int64, complex64, complex128, string.
|
y
|
A tf.Tensor . Must have the same type as x.
|
name
|
A name for the operation (optional) |
__rand__
__rand__(
y, name=None
)
Returns the truth value of elementwise x & y.
Logical AND function.
Requires that x and y have the same shape or have broadcast-compatible shapes. For example, y can be:
- A single Python boolean, where the result will be calculated by applying logical AND with the single element to each element in x.
- A tf.Tensor object of dtype tf.bool of the same shape or a broadcast-compatible shape. In this case, the result will be the element-wise logical AND of x and y.
- A tf.RaggedTensor object of dtype tf.bool of the same shape or a broadcast-compatible shape. In this case, the result will be the element-wise logical AND of x and y.
For example:
# `y` is a Python boolean
x = tf.ragged.constant([[True, False], [True]])
y = True
x & y
<tf.RaggedTensor [[True, False], [True]]>
tf.math.logical_and(x, y) # Equivalent of x & y
<tf.RaggedTensor [[True, False], [True]]>
y & x
<tf.RaggedTensor [[True, False], [True]]>
tf.math.reduce_all(x & y) # Reduce to a scalar bool Tensor.
<tf.Tensor: shape=(), dtype=bool, numpy=False>
# `y` is a tf.Tensor of the same shape.
x = tf.ragged.constant([[True, False], [True, False]])
y = tf.constant([[True, False], [False, True]])
x & y
<tf.RaggedTensor [[True, False], [False, False]]>
# `y` is a tf.Tensor of a broadcast-compatible shape.
x = tf.ragged.constant([[True, False], [True]])
y = tf.constant([[True], [False]])
x & y
<tf.RaggedTensor [[True, False], [False]]>
# `y` is a `tf.RaggedTensor` of the same shape.
x = tf.ragged.constant([[True, False], [True]])
y = tf.ragged.constant([[False, True], [True]])
x & y
<tf.RaggedTensor [[False, False], [True]]>
# `y` is a `tf.RaggedTensor` of a broadcast-compatible shape.
x = tf.ragged.constant([[[True, True, False]], [[]], [[True, False]]])
y = tf.ragged.constant([[[True]], [[True]], [[False]]], ragged_rank=1)
x & y
<tf.RaggedTensor [[[True, True, False]], [[]], [[False, False]]]>
Args | |
---|---|
y
|
A Python boolean or a tf.Tensor or tf.RaggedTensor of dtype
tf.bool .
|
name
|
A name for the operation (optional). |
Returns | |
---|---|
A tf.RaggedTensor of dtype tf.bool with the shape that x and y
broadcast to.
|
__rdiv__
__rdiv__(
y, name=None
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer.
Args | |
---|---|
x | Tensor numerator of real numeric type.
y | Tensor denominator of real numeric type.
name | A name for the operation (optional).

Returns | |
---|---|
x / y returns the quotient of x and y. |
Migrate to TF2
This function is deprecated in TF2. Prefer using the Tensor division operator, tf.divide, or tf.math.divide, which obey the Python 3 division operator semantics.
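As an illustrative sketch of the TF2-preferred replacement (example values chosen arbitrarily):
x = tf.constant([1, 3])
tf.math.divide(x, 2)  # Python 3 semantics: integer inputs yield a floating-point result.
<tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.5, 1.5])>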
__rfloordiv__
__rfloordiv__(
y, name=None
)
Divides x / y elementwise, rounding toward the most negative integer.
Mathematically, this is equivalent to floor(x / y). For example:
floor(8.4 / 4.0) = floor(2.1) = 2.0
floor(-8.4 / 4.0) = floor(-2.1) = -3.0
This is equivalent to the '//' operator in Python 3.0 and above.
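A minimal sketch of the reflected form on a RaggedTensor (example values assumed for illustration):
x = tf.ragged.constant([[1, 2], [4]])
8 // x  # Invokes x.__rfloordiv__(8), i.e. 8 // x element-wise.
<tf.RaggedTensor [[8, 4], [2]]>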
Args | |
---|---|
x | Tensor numerator of real numeric type.
y | Tensor denominator of real numeric type.
name | A name for the operation (optional).

Returns | |
---|---|
x / y rounded toward -infinity. |

Raises | |
---|---|
TypeError | If the inputs are complex.
__rmod__
__rmod__(
    y: Annotated[Any, tf.raw_ops.Any],
    name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns element-wise remainder of division.
This follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + floormod(x, y) = x, regardless of the signs of x and y.
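A minimal sketch of the flooring-remainder semantics (example values assumed for illustration):
tf.constant(-7) % tf.constant(3)  # Result takes the sign of y, matching Python's % operator.
<tf.Tensor: shape=(), dtype=int32, numpy=2>
x = tf.ragged.constant([[2, 3], [4]])
7 % x  # Invokes x.__rmod__(7), i.e. 7 % x element-wise.
<tf.RaggedTensor [[1, 1], [3]]>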
Args | |
---|---|
x | A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64, bfloat16, half, float32, float64.
y | A Tensor. Must have the same type as x.
name | A name for the operation (optional).

Returns | |
---|---|
A Tensor. Has the same type as x. |
__rmul__
__rmul__(
y, name=None
)
Returns an element-wise x * y.
For example:
x = tf.constant(([1, 2, 3, 4]))
tf.math.multiply(x, x)
<tf.Tensor: shape=(4,), dtype=..., numpy=array([ 1, 4, 9, 16], dtype=int32)>
Since tf.math.multiply will convert its arguments to Tensors, you can also pass in non-Tensor arguments:
tf.math.multiply(7, 6)
<tf.Tensor: shape=(), dtype=int32, numpy=42>
If x.shape is not the same as y.shape, they will be broadcast to a compatible shape. (More about broadcasting here.)
For example:
x = tf.ones([1, 2])
y = tf.ones([2, 1])
x * y  # Taking advantage of operator overriding
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 1.],
       [1., 1.]], dtype=float32)>
The reduction version of this elementwise operation is tf.math.reduce_prod.
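Because this method is defined on tf.RaggedTensor, the same element-wise multiplication applies to ragged operands. A minimal sketch (values assumed for illustration):
rt = tf.ragged.constant([[1, 2], [3]])
rt * 2
<tf.RaggedTensor [[2, 4], [6]]>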
Args | |
---|---|
x | A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
y | A Tensor. Must have the same type as x.
name | A name for the operation (optional).

Returns | |
---|---|
A Tensor. Has the same type as x. |
__ror__
__ror__(
    y: Annotated[Any, tf.raw_ops.Any],
    name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of x OR y element-wise.
Logical OR function.
Requires that x and y have the same shape or have broadcast-compatible shapes. For example, x and y can be:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical OR with the single element to each element in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical OR of the two input tensors.
You can also use the | operator instead.
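For instance, a minimal sketch using the | operator on two ragged operands of the same shape (values assumed for illustration):
x = tf.ragged.constant([[True, False], [False]])
y = tf.ragged.constant([[False, False], [True]])
x | y
<tf.RaggedTensor [[True, False], [True]]>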
This op also supports broadcasting.
The reduction version of this elementwise operation is tf.math.reduce_any.
Args | |
---|---|
x | A tf.Tensor of type bool.
y | A tf.Tensor of type bool.
name | A name for the operation (optional).

Returns | |
---|---|
A tf.Tensor of type bool with the shape that x and y broadcast to. |
__rpow__
__rpow__(
y, name=None
)
Computes the power of one value to another.
Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]
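A minimal sketch of the reflected form on a RaggedTensor (values assumed for illustration):
rt = tf.ragged.constant([[1, 2], [3]])
2 ** rt  # Invokes rt.__rpow__(2), i.e. 2 ** rt element-wise.
<tf.RaggedTensor [[2, 4], [8]]>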
Args | |
---|---|
x | A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
y | A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
name | A name for the operation (optional).

Returns | |
---|---|
A Tensor. |
__rsub__
__rsub__(
y, name=None
)
Returns x - y element-wise.
Both input and output have a range of (-inf, inf).
Example usages below.
Subtract operation between an array and a scalar:
x = [1, 2, 3, 4, 5]
y = 1
tf.subtract(x, y)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 0, -1, -2, -3, -4], dtype=int32)>
Note that the binary - operator can be used instead:
x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
x - y
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
Subtract operation between an array and a tensor of same shape:
x = [1, 2, 3, 4, 5]
y = tf.constant([5, 4, 3, 2, 1])
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 4, 2, 0, -2, -4], dtype=int32)>
If one of the inputs is a tensor and the other is a non-tensor, the non-tensor input is cast to the dtype of the tensor input, which can cause unwanted overflow or underflow. For example,
x = tf.constant([1, 2], dtype=tf.int8)
y = [2**8 + 1, 2**8 + 2]
tf.subtract(x, y)
<tf.Tensor: shape=(2,), dtype=int8, numpy=array([0, 0], dtype=int8)>
When subtracting two input values of different shapes, tf.subtract follows the general broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
For example,
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(2, 1, 3)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 3), dtype=float64, numpy=
array([[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]])>
Example with inputs of different dimensions:
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(1, 6)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 6), dtype=float64, numpy=
array([[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]]])>
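A minimal sketch of the reflected form on a RaggedTensor (values assumed for illustration):
rt = tf.ragged.constant([[1, 2], [3]])
10 - rt  # Invokes rt.__rsub__(10), i.e. 10 - rt element-wise.
<tf.RaggedTensor [[9, 8], [7]]>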
Args | |
---|---|
x | A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32, uint64.
y | A Tensor. Must have the same type as x.
name | A name for the operation (optional).

Returns | |
---|---|
A Tensor. Has the same type as x. |
__rtruediv__
__rtruediv__(
y, name=None
)
Divides x / y elementwise (using Python 3 division operator semantics).
This function forces Python 3 division operator semantics, where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.
x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16, and float64 for int32 and int64 (matching the behavior of NumPy).
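A minimal sketch of the reflected form on a RaggedTensor, showing the integer-to-float cast (values assumed for illustration):
rt = tf.ragged.constant([[1, 2], [4]])  # int32 values
1 / rt  # Invokes rt.__rtruediv__(1); integer inputs produce a floating-point result.
<tf.RaggedTensor [[1.0, 0.5], [0.25]]>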
Args | |
---|---|
x | Tensor numerator of numeric type.
y | Tensor denominator of numeric type.
name | A name for the operation (optional).

Returns | |
---|---|
x / y evaluated in floating point. |

Raises | |
---|---|
TypeError | If x and y have different dtypes.
__rxor__
__rxor__(
y, name='LogicalXor'
)
Logical XOR function.
x ^ y = (x | y) & ~(x & y)
Requires that x and y have the same shape or have broadcast-compatible shapes. For example, x and y can be:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical XOR with the single element to each element in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical XOR of the two input tensors.
Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_xor(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_xor(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False, True])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_xor(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
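A minimal sketch of the reflected XOR on two ragged operands of the same shape (values assumed for illustration):
x = tf.ragged.constant([[True, False], [True]])
y = tf.ragged.constant([[False, False], [True]])
x ^ y
<tf.RaggedTensor [[True, False], [False]]>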
Args | |
---|---|
x | A tf.Tensor of type bool.
y | A tf.Tensor of type bool.
name | A name for the operation (optional).

Returns | |
---|---|
A tf.Tensor of type bool with the same size as that of x or y. |
__sub__
__sub__(
y, name=None
)
Returns x - y element-wise.
Both input and output have a range of (-inf, inf).
Example usages below.
Subtract operation between an array and a scalar:
x = [1, 2, 3, 4, 5]
y = 1
tf.subtract(x, y)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 0, -1, -2, -3, -4], dtype=int32)>
Note that the binary - operator can be used instead:
x = tf.convert_to_tensor([1, 2, 3, 4, 5])
y = tf.convert_to_tensor(1)
x - y
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>
Subtract operation between an array and a tensor of same shape:
x = [1, 2, 3, 4, 5]
y = tf.constant([5, 4, 3, 2, 1])
tf.subtract(y, x)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 4, 2, 0, -2, -4], dtype=int32)>
If one of the inputs is a tensor and the other is a non-tensor, the non-tensor input is cast to the dtype of the tensor input, which can cause unwanted overflow or underflow. For example,
x = tf.constant([1, 2], dtype=tf.int8)
y = [2**8 + 1, 2**8 + 2]
tf.subtract(x, y)
<tf.Tensor: shape=(2,), dtype=int8, numpy=array([0, 0], dtype=int8)>
When subtracting two input values of different shapes, tf.subtract follows the general broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
For example,
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(2, 1, 3)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 3), dtype=float64, numpy=
array([[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]])>
Example with inputs of different dimensions:
x = np.ones(6).reshape(2, 3, 1)
y = np.ones(6).reshape(1, 6)
tf.subtract(x, y)
<tf.Tensor: shape=(2, 3, 6), dtype=float64, numpy=
array([[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]]])>
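Because this method is defined on tf.RaggedTensor, the - operator also applies directly to ragged operands. A minimal sketch (values assumed for illustration):
rt = tf.ragged.constant([[1, 2], [3]])
rt - 1
<tf.RaggedTensor [[0, 1], [2]]>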
Args | |
---|---|
x | A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32, uint64.
y | A Tensor. Must have the same type as x.
name | A name for the operation (optional).

Returns | |
---|---|
A Tensor. Has the same type as x. |
__truediv__
__truediv__(
y, name=None
)
Divides x / y elementwise (using Python 3 division operator semantics).
This function forces Python 3 division operator semantics, where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.
x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16, and float64 for int32 and int64 (matching the behavior of NumPy).
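Because this method is defined on tf.RaggedTensor, the / operator also applies directly to ragged operands, with the same casting behavior. A minimal sketch (values assumed for illustration):
rt = tf.ragged.constant([[1, 2], [4]])  # int32 values
rt / 2
<tf.RaggedTensor [[0.5, 1.0], [2.0]]>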
Args | |
---|---|
x | Tensor numerator of numeric type.
y | Tensor denominator of numeric type.
name | A name for the operation (optional).

Returns | |
---|---|
x / y evaluated in floating point. |

Raises | |
---|---|
TypeError | If x and y have different dtypes.
__xor__
__xor__(
y, name='LogicalXor'
)
Logical XOR function.
x ^ y = (x | y) & ~(x & y)
Requires that x and y have the same shape or have broadcast-compatible shapes. For example, x and y can be:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical XOR with the single element to each element in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical XOR of the two input tensors.
Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_xor(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_xor(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False, True])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_xor(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
Args | |
---|---|
x | A tf.Tensor of type bool.
y | A tf.Tensor of type bool.
name | A name for the operation (optional).

Returns | |
---|---|
A tf.Tensor of type bool with the same size as that of x or y. |