Reshapes a quantized tensor as per the Reshape op.
Summary
Arguments:
* scope: A Scope object
* shape: Defines the shape of the output tensor.
* input_min: The minimum value of the input.
* input_max: The maximum value of the input.
Returns:
* `Output` output
* `Output` output_min: This value is copied from input_min.
* `Output` output_max: This value is copied from input_max. */
class QuantizedReshape {
public:
QuantizedReshape(const tensorflow::Scope& scope, tensorflow::Input tensor,
tensorflow::Input shape, tensorflow::Input input_min,
tensorflow::Input input_max);
**Note**: The rank of a tensor is not the same as the rank of a matrix. The rank
of a tensor is the number of indices required to uniquely select each element
of the tensor. Rank is also known as "order", "degree", or "ndims."
Given `tensor`, this operation returns a tensor that has the same values
as `tensor` with shape `shape`.
If one component of `shape` is the special value -1, the size of that dimension
is computed so that the total size remains constant. In particular, a `shape`
of `[-1]` flattens into 1-D. At most one component of `shape` can be -1.
If `shape` is 1-D or higher, then the operation returns a tensor with shape
`shape` filled with the values of `tensor`. In this case, the number of elements
implied by `shape` must be the same as the number of elements in `tensor`.
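For example, a minimal sketch using the Python equivalent `tf.reshape` (TF 1.x style, matching the other examples in this reference):

```python
import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9])
# -1 infers the remaining dimension: 9 elements / 3 rows = 3 columns.
r = tf.reshape(t, [3, -1])
with tf.Session() as sess:
    print(sess.run(r))  # ==> [[1 2 3] [4 5 6] [7 8 9]]
```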
/** Assign `value` to the sliced l-value reference of `ref`.
The values of `value` are assigned to the positions in the variable
`ref` that are selected by the slice parameters. The slice parameters
`begin`, `end`, `strides`, etc. work exactly as in `StridedSlice`.
NOTE this op currently does not support broadcasting and so `value`'s
shape must be exactly the shape produced by the slice of `ref`.
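A sketch of the Python variable-slice sugar that lowers to this kind of strided-slice assignment (assuming a TF 1.x `tf.Variable`):

```python
import tensorflow as tf

v = tf.Variable([1, 2, 3, 4, 5])
# Assign to the slice l-value v[1:4]; shapes must match exactly.
assign_op = v[1:4].assign([20, 30, 40])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(assign_op))  # ==> [ 1 20 30 40  5]
```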
This op first slices `input` along the dimension `batch_dim`, and for each
slice `i`, reverses the first `seq_lengths[i]` elements along
the dimension `seq_dim`.
The elements of `seq_lengths` must obey `seq_lengths[i] <= input.dims[seq_dim]`,
and `seq_lengths` must be a vector of length `input.dims[batch_dim]`.
The output slice `i` along dimension `batch_dim` is then given by input
slice `i`, with the first `seq_lengths[i]` slices along dimension
`seq_dim` reversed.
Arguments:
* scope: A Scope object
* input: The input to reverse.
* seq_lengths: 1-D with length `input.dims(batch_dim)` and
`max(seq_lengths) <= input.dims(seq_dim)`
* seq_dim: The dimension which is partially reversed.
Optional attributes (see `Attrs`):
* batch_dim: The dimension along which reversal is performed.
Returns:
* `Output`: The partially reversed input. It has the same shape as `input`. */
class ReverseSequence {
public:
/// Optional attribute setters for ReverseSequence
struct Attrs {
/** The dimension along which reversal is performed.
Defaults to 0 */
TF_MUST_USE_RESULT Attrs BatchDim(int64 x) {
Attrs ret = *this;
ret.batch_dim_ = x;
return ret;
}
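A minimal sketch using the Python equivalent `tf.reverse_sequence` (keeping the `seq_dim`/`batch_dim` spelling used above):

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3, 4],
                 [1, 2, 3, 4]])
# Row 0: reverse its first 2 elements; row 1: its first 3.
r = tf.reverse_sequence(x, seq_lengths=[2, 3], seq_dim=1, batch_dim=0)
with tf.Session() as sess:
    print(sess.run(r))  # ==> [[2 1 3 4] [3 2 1 4]]
```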
NOTE `tf.reverse` has now changed behavior in preparation for 1.0.
`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0.
Given a `tensor`, and a `int32` tensor `axis` representing the set of
dimensions of `tensor` to reverse. This operation reverses each dimension
`i` for which there exists `j` s.t. `axis[j] == i`.
`tensor` can have up to 8 dimensions. The number of dimensions specified
in `axis` may be 0 or more. If an index is specified more than
once, an InvalidArgument error is raised.
Arguments:
* scope: A Scope object
* tensor: Up to 8-D.
* axis: 1-D. The indices of the dimensions to reverse. Must be in the range
`[-rank(tensor), rank(tensor))`.
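For example, a minimal sketch using the Python equivalent `tf.reverse`:

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
r = tf.reverse(t, axis=[1])  # reverse along dimension 1
with tf.Session() as sess:
    print(sess.run(r))  # ==> [[3 2 1] [6 5 4]]
```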
/** Scatter `updates` into a new tensor according to `indices`.
Creates a new tensor by applying sparse `updates` to individual values or
slices within a tensor (initially zero for numeric, empty for string) of
the given `shape` according to indices. This operator is the inverse of the
`tf.gather_nd` operator which extracts values or slices from a given tensor.
This operation is similar to tensor_scatter_add, except that the tensor is
zero-initialized. Calling `tf.scatter_nd(indices, values, shape)` is identical
to `tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values)`.
If `indices` contains duplicates, then their updates are accumulated (summed).
**WARNING**: The order in which updates are applied is nondeterministic, so the
output will be nondeterministic if `indices` contains duplicates -- because
of some numerical approximation issues, numbers summed in different order
may yield different results.
`indices` is an integer tensor containing indices into a new tensor of shape
`shape`. The last dimension of `indices` can be at most the rank of `shape`:
indices.shape[-1] <= shape.rank
The last dimension of `indices` corresponds to indices into elements
(if `indices.shape[-1] = shape.rank`) or slices
(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of
`shape`. `updates` is a tensor with shape
indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of scatter is to insert individual elements in a tensor by
index. For example, say we want to insert 4 scattered elements in a rank-1
tensor with 8 elements.
In Python, this scatter operation would look like this:
```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
    print(sess.run(scatter))
```
The resulting tensor would look like this:
[0, 11, 0, 10, 9, 0, 0, 12]
We can also insert entire slices of a higher-rank tensor all at once. For
example, we can insert two slices in the first dimension of a
rank-3 tensor with two matrices of new values.
In Python, this scatter operation would look like this:
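(The values below are illustrative: two `[4, 4]` slices inserted at rows 0 and 2 of a `[4, 4, 4]` output.)

```python
import tensorflow as tf

indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
    print(sess.run(scatter))
```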
Note that on CPU, if an out of bound index is found, an error is returned.
On GPU, if an out of bound index is found, the index is ignored.
Arguments:
* scope: A Scope object
* indices: Index tensor.
* updates: Updates to scatter into output.
* shape: 1-D. The shape of the resulting tensor.
Returns:
* `Output`: A new tensor with the given shape and updates applied according
to the indices. */
class ScatterNd {
public:
ScatterNd(const tensorflow::Scope& scope, tensorflow::Input indices,
tensorflow::Input updates, tensorflow::Input shape);
operator ::tensorflow::Output() const { return output; }
operator ::tensorflow::Input() const { return output; }
::tensorflow::Node* node() const { return output.node(); }
/** Applies sparse addition to `input` using individual values or slices
from `updates` according to indices `indices`. The updates are non-aliasing:
`input` is only modified in-place if no other operations will use it.
Otherwise, a copy of `input` is made. This operation has a gradient with
respect to both `input` and `updates`.
`input` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
`indices` must be integer tensor, containing indices into `input`.
It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or `(P-K)`-dimensional slices
(if `K < P`) along the `K`th dimension of `input`.
`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
[d_0, ..., d_{Q-2}, input.shape[K], ..., input.shape[P-1]]
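For example, a sketch reconstructing the element-wise case behind the result shown below, assuming the TF 1.x Python binding `tf.scatter_nd_non_aliasing_add` (input `[1, ..., 8]`, four scattered additions):

```python
import tensorflow as tf

input = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
# Assumes the op's Python binding tf.scatter_nd_non_aliasing_add.
output = tf.scatter_nd_non_aliasing_add(input, indices, updates)
with tf.Session() as sess:
    print(sess.run(output))
```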
The resulting value `output` would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See `tf.scatter_nd` for more details about how to make updates to slices.
Arguments:
* scope: A Scope object
* input: A Tensor.
* indices: A Tensor. Must be one of the following types: `int32`, `int64`.
A tensor of indices into `input`.
* updates: A Tensor. Must have the same type as `input`. A tensor of updated values
to add to `input`.
Returns:
* `Output`: A `Tensor` with the same shape as `input`, containing values of `input`
updated with `updates`. */
class ScatterNdNonAliasingAdd {
public:
ScatterNdNonAliasingAdd(const tensorflow::Scope& scope, tensorflow::Input
input, tensorflow::Input indices, tensorflow::Input
updates);
operator ::tensorflow::Output() const { return output; }
operator ::tensorflow::Input() const { return output; }
::tensorflow::Node* node() const { return output.node(); }
The output tensor is a tensor with dimensions described by 'size'
whose values are extracted from 'input' starting at the offsets in
'begin'.
*Requirements*:
0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)
Arguments:
* scope: A Scope object
* begin: begin[i] specifies the offset into the 'i'th dimension of
'input' to slice from.
* size: size[i] specifies the number of elements of the 'i'th dimension
of 'input' to slice. If size[i] is -1, all remaining elements in dimension
i are included in the slice (i.e. this is equivalent to setting
size[i] = input.dim_size(i) - begin[i]).
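For example, a minimal sketch using the Python equivalent `tf.slice`:

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
s = tf.slice(t, begin=[0, 1], size=[2, -1])  # -1: take the rest of dim 1
with tf.Session() as sess:
    print(sess.run(s))  # ==> [[2 3] [5 6]]
```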
This is a legacy version of the more general SpaceToBatchND.
Zero-pads and then rearranges (permutes) blocks of spatial data into batch.
More specifically, this op outputs a copy of the input tensor where values from
the `height` and `width` dimensions are moved to the `batch` dimension. After
the zero-padding, both `height` and `width` of the input must be divisible by the
block size.
Arguments:
* scope: A Scope object
* input: 4-D with shape `[batch, height, width, depth]`.
* paddings: 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies
the padding of the input with zeros across the spatial dimensions as follows:
The attr `block_size` must be greater than one. It indicates the block size.
* Non-overlapping blocks of size `block_size x block_size` in the height and
width dimensions are rearranged into the batch dimension at each location.
* The batch of the output tensor is `batch * block_size * block_size`.
* Both `height_pad` and `width_pad` must be divisible by `block_size`.
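For example, a minimal sketch using the Python equivalent `tf.space_to_batch`:

```python
import tensorflow as tf

x = tf.constant([[[[1], [2]],
                  [[3], [4]]]])  # shape [1, 2, 2, 1]
y = tf.space_to_batch(x, paddings=[[0, 0], [0, 0]], block_size=2)
with tf.Session() as sess:
    print(sess.run(y))  # shape [4, 1, 1, 1]: [[[[1]]] [[[2]]] [[[3]]] [[[4]]]]
```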
This operation divides "spatial" dimensions `[1, ..., M]` of the input into a
grid of blocks of shape `block_shape`, and interleaves these blocks with the
"batch" dimension (0) such that in the output, the spatial dimensions
`[1, ..., M]` correspond to the position within the grid, and the batch
dimension combines both the position within a spatial block and the original
batch position. Prior to division into blocks, the spatial dimensions of the
input are optionally zero padded according to `paddings`. See below for a
precise description.
Arguments:
* scope: A Scope object
* input: N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,
where spatial_shape has `M` dimensions.
* block_shape: 1-D with shape `[M]`, all values must be >= 1.
* paddings: 2-D with shape `[M, 2]`, all values must be >= 0.
`paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension
`i + 1`, which corresponds to spatial dimension `i`. It is required that
`block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.
This operation is equivalent to the following steps:
1. Zero-pad the start and end of dimensions `[1, ..., M]` of the
input according to `paddings` to produce `padded` of shape `padded_shape`.
2. Reshape `padded` to `reshaped_padded` of shape:
[batch] + [padded_shape[1] / block_shape[0], block_shape[0], ...,
padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape
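A minimal sketch of the same transformation through the Python equivalent `tf.space_to_batch_nd`:

```python
import tensorflow as tf

x = tf.constant([[[[1], [2]],
                  [[3], [4]]]])  # shape [1, 2, 2, 1]
y = tf.space_to_batch_nd(x, block_shape=[2, 2], paddings=[[0, 0], [0, 0]])
with tf.Session() as sess:
    print(sess.run(y))  # shape [4, 1, 1, 1]
```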
Rearranges blocks of spatial data into depth. More specifically,
this op outputs a copy of the input tensor where values from the `height`
and `width` dimensions are moved to the `depth` dimension.
The attr `block_size` indicates the input block size.
* Non-overlapping blocks of size `block_size x block_size` are rearranged
into depth at each location.
* The depth of the output tensor is `block_size * block_size * input_depth`.
* The Y, X coordinates within each block of the input become the high order
component of the output channel index.
* The input tensor's height and width must be divisible by block_size.
The `data_format` attr specifies the layout of the input and output tensors
with the following options:
"NHWC": `[ batch, height, width, channels ]`
"NCHW": `[ batch, channels, height, width ]`
"NCHW_VECT_C":
`qint8 [ batch, channels / 4, height, width, 4 ]`
It is useful to consider the operation as transforming a 6-D Tensor.
e.g. for data_format = NHWC,
Each element in the input tensor can be specified via 6 coordinates,
ordered by decreasing memory layout significance as:
n,oY,bY,oX,bX,iC (where n = batch index, oX and oY are the X and Y coordinates
within the output image, bX and bY are the coordinates
within the input block, and iC is the input channel index).
The output would be a transpose to the following layout:
n,oY,oX,bY,bX,iC
This operation is useful for resizing the activations between convolutions
(but keeping all data), e.g. instead of pooling. It is also useful for training
purely convolutional models.
For example, given an input of shape `[1, 2, 2, 1]`, data_format = "NHWC" and
block_size = 2:

```
x = [[[[1], [2]], [[3], [4]]]]
```

This operation will output a tensor of shape `[1, 1, 1, 4]`:

```
[[[[1, 2, 3, 4]]]]
```
Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`,
the corresponding output will have a single element (i.e. width and height are
both 1) and will have a depth of 4 channels (1 * block_size * block_size).
The output element shape is `[1, 1, 4]`.
For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`:
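A sketch of this case using the Python equivalent `tf.space_to_depth`:

```python
import tensorflow as tf

x = tf.constant([[[[1, 2, 3], [4, 5, 6]],
                  [[7, 8, 9], [10, 11, 12]]]])  # shape [1, 2, 2, 3]
y = tf.space_to_depth(x, block_size=2)          # shape [1, 1, 1, 12]
with tf.Session() as sess:
    print(sess.run(y))  # ==> [[[[ 1  2  3 ... 10 11 12]]]]
```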
/** Splits a tensor into `num_split` tensors along one dimension.
Arguments:
* scope: A Scope object
* axis: 0-D. The dimension along which to split. Must be in the range
`[-rank(value), rank(value))`.
* value: The tensor to split.
* num_split: The number of ways to split. Must evenly divide
`value.shape[split_dim]`.
Returns:
* `OutputList`: They are identically shaped tensors, whose shape matches that of `value`
except along `axis`, where their sizes are
`value.shape[axis] / num_split`. */
class Split {
public:
Split(const tensorflow::Scope& scope, tensorflow::Input axis,
tensorflow::Input value, int64 num_split);
tensorflow::Output operator[](size_t index) const { return output[index]; }
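For example, a minimal sketch using the Python equivalent `tf.split`:

```python
import tensorflow as tf

value = tf.zeros([5, 30])
# Split along dimension 1 into 3 equally sized tensors.
s0, s1, s2 = tf.split(value, num_or_size_splits=3, axis=1)
print(s0.shape)  # ==> (5, 10)
```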
/** Splits a tensor into `num_split` tensors along one dimension.
Arguments:
* scope: A Scope object
* value: The tensor to split.
* size_splits: list containing the sizes of each output tensor along the split
dimension. Must sum to the dimension of `value` along `axis`.
Can contain one -1 indicating that dimension is to be inferred.
* axis: 0-D. The dimension along which to split. Must be in the range
`[-rank(value), rank(value))`.
Returns:
* `OutputList`: Tensors whose shape matches that of `value`
except along `axis`, where their sizes are
`size_splits[i]`. */
class SplitV {
public:
SplitV(const tensorflow::Scope& scope, tensorflow::Input value,
tensorflow::Input size_splits, tensorflow::Input axis, int64
num_split);
tensorflow::Output operator[](size_t index) const { return output[index]; }
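For example, a minimal sketch using the Python equivalent `tf.split` with explicit sizes:

```python
import tensorflow as tf

value = tf.zeros([5, 30])
# Sizes must sum to 30; one entry may be -1 to infer its size.
s0, s1, s2 = tf.split(value, num_or_size_splits=[4, 15, 11], axis=1)
print(s0.shape, s1.shape, s2.shape)  # ==> (5, 4) (5, 15) (5, 11)
```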
/** Removes dimensions of size 1 from the shape of a tensor.
Given a tensor `input`, this operation returns a tensor of the same type with
all dimensions of size 1 removed. If you don't want to remove all size 1
dimensions, you can remove specific size 1 dimensions by specifying
`axis`.
For example:

```
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t)) ==> [2, 3]
```

Or, to remove specific size 1 dimensions:

```
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
```
Arguments:
* scope: A Scope object
* input: The `input` to squeeze.
Optional attributes (see `Attrs`):
* axis: If specified, only squeezes the dimensions listed. The dimension
index starts at 0. It is an error to squeeze a dimension that is not 1. Must
be in the range `[-rank(input), rank(input))`.
Returns:
* `Output`: Contains the same data as `input`, but has one or more dimensions of
size 1 removed. */
class Squeeze {
public:
/// Optional attribute setters for Squeeze
struct Attrs {
/** If specified, only squeezes the dimensions listed. The dimension
index starts at 0. It is an error to squeeze a dimension that is not 1. Must
be in the range `[-rank(input), rank(input))`.
Defaults to [] */
TF_MUST_USE_RESULT Attrs Axis(const gtl::ArraySlice<int>& x) {
Attrs ret = *this;
ret.axis_ = x;
return ret;
}
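A minimal sketch of both forms using the Python equivalent `tf.squeeze`:

```python
import tensorflow as tf

t = tf.zeros([1, 2, 1, 3, 1, 1])
print(tf.squeeze(t).shape)          # ==> (2, 3)
print(tf.squeeze(t, [2, 4]).shape)  # ==> (1, 2, 3, 1)
```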
When executed in a graph, this op outputs its input tensor as-is.
When building ops to compute gradients, this op prevents the contribution of
its inputs from being taken into account. Normally, the gradient generator adds ops
to a graph to compute the derivatives of a specified 'loss' by recursively
finding out inputs that contributed to its computation. If you insert this op
in the graph, its inputs are masked from the gradient generator. They are not
taken into account for computing gradients.
This is useful any time you want to compute a value with TensorFlow but need
to pretend that the value was a constant. Some examples include:
* The *EM* algorithm where the *M-step* should not involve backpropagation
through the output of the *E-step*.
* Contrastive divergence training of Boltzmann machines where, when
differentiating the energy function, the training must not backpropagate
through the graph that generated the samples from the model.
* Adversarial training, where no backprop should happen through the adversarial
example generation process.
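A minimal sketch of the masking effect using the Python equivalent `tf.stop_gradient`:

```python
import tensorflow as tf

x = tf.constant(3.0)
y = tf.stop_gradient(x * x)  # treated as a constant by the gradient generator
z = x * y
grad = tf.gradients(z, [x])[0]  # dz/dx == y == 9.0; no term flows through y
with tf.Session() as sess:
    print(sess.run(grad))  # ==> 9.0
```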
Note, most Python users will want to use the Python `Tensor.__getitem__`
or `Variable.__getitem__` rather than this op directly.
The goal of this op is to produce a new tensor with a subset of
the elements from the `n` dimensional `input` tensor. The subset is chosen using
a sequence of `m` sparse range specifications encoded into the arguments
of this function. Note, in some cases
`m` could be equal to `n`, but this need not be the case. Each
range specification entry can be one of the following:
An ellipsis (...). Ellipses are used to imply zero or more
dimensions of full-dimension selection and are produced using
`ellipsis_mask`. For example, `foo[...]` is the identity slice.
A new axis. This is used to insert a new shape=1 dimension and is
produced using `new_axis_mask`. For example, `foo[tf.newaxis, ...]` where
`foo` is shape `(3, 4)` produces a `(1, 3, 4)` tensor.
A range `begin:end:stride`. This is used to specify how much to choose from
a given dimension. `stride` can be any integer but 0. `begin` is an integer
which represents the index of the first value to select, while `end` represents
the index one past the last value to select (`end` is exclusive). The number
of values selected in each
dimension is `end - begin` if `stride > 0` and `begin - end` if `stride < 0`.
`begin` and `end` can be negative where `-1` is the last element, `-2` is
the second to last. `begin_mask` controls whether to replace the explicitly
given `begin` with an implicit effective value of `0` if `stride > 0` and
`-1` if `stride < 0`. `end_mask` is analogous but produces the number
required to create the largest open interval. For example, given a shape
`(3,)` tensor `foo[:]`, the effective `begin` and `end` are `0` and `3`. Do
not assume this is equivalent to `foo[0:-1]` which has an effective `begin`
and `end` of `0` and `2`. Another example is `foo[-2::-1]` which reverses the
first dimension of a tensor while dropping the last element (in the original
order). For example, `foo = [1,2,3,4]; foo[-2::-1]` is `[3,2,1]`.
A single index. This is used to keep only elements that have a given
index. For example, `foo[2, :]` on a shape `(5,6)` tensor produces a
shape `(6,)` tensor. This is encoded in `begin` and `end` and
`shrink_axis_mask`.
Each conceptual range specification is encoded in the op's arguments. This
encoding is best understood by considering a non-trivial example. In
particular,
`foo[1, 2:4, None, ..., :-3:-1, :]` will be encoded as follows (reconstructed
from the step-by-step walkthrough below):
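```
begin = [1, 2, x, x, 0, x]   # x denotes "don't care" (usually 0)
end = [2, 4, x, x, -3, x]
strides = [1, 1, x, x, -1, 1]
begin_mask = 1<<4 | 1<<5 = 48
end_mask = 1<<5 = 32
ellipsis_mask = 1<<3 = 8
new_axis_mask = 1<<2 = 4
shrink_axis_mask = 1<<0 = 1
```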
In this case if foo.shape is (5, 5, 5, 5, 5, 5) the final shape of
the slice becomes (2, 1, 5, 5, 2, 5).
Let us walk step by step through each argument specification.
1. The first argument in the example slice is turned into begin = 1 and
end = begin + 1 = 2. To disambiguate from the original spec 2:4 we
also set the appropriate bit in shrink_axis_mask.
2. 2:4 contributes 2, 4, 1 to begin, end, and stride. All masks have
zero bits contributed.
3. None is a synonym for tf.newaxis. This means insert a dimension of size 1
in the final shape. Dummy values are contributed to begin,
end and stride, while the new_axis_mask bit is set.
4. ... grabs the full ranges from as many dimensions as needed to
fully specify a slice for every dimension of the input shape.
5. :-3:-1 shows the use of negative indices. A negative index i associated
with a dimension that has shape s is converted to a positive index
s + i. So -1 becomes s-1 (i.e. the last element). This conversion
is done internally so begin, end and strides receive x, -3, and -1.
The appropriate begin_mask bit is set to indicate the start range is the
full range (ignoring the x).
6. : indicates that the entire contents of the corresponding dimension
is selected. This is equivalent to :: or 0::1. begin, end, and strides
receive 0, 0, and 1, respectively. The appropriate bits in begin_mask and
end_mask are also set.
Requirements:
`0 != strides[i] for i in [0, m)`
`ellipsis_mask` must be a power of two (only one ellipsis)
Arguments:
* scope: A Scope object
* begin: begin[k] specifies the offset into the kth range specification.
The exact dimension this corresponds to will be determined by context.
Out-of-bounds values will be silently clamped. If the kth bit of
begin_mask is set, then begin[k] is ignored and the full range of the
appropriate dimension is used instead. Negative values cause indexing
to start from the highest element, e.g. if foo==[1,2,3] then foo[-1]==3.
* end: end[i] is like begin with the exception that end_mask is
used to determine full ranges.
* strides: strides[i] specifies the increment in the ith specification
after extracting a given element. Negative indices will reverse
the original order. Out-of-range values are
clamped to `[0,dim[i])` if `strides[i] > 0` or `[-1,dim[i]-1]` if `strides[i] < 0`
Optional attributes (see Attrs):
* begin_mask: a bitmask where a bit i being 1 means to ignore the begin
value and instead use the largest interval possible. At runtime
begin[i] will be replaced with [0, n-1) if stride[i] > 0 or
[-1, n-1] if stride[i] < 0
* end_mask: analogous to begin_mask
* ellipsis_mask: a bitmask where bit i being 1 means the ith
position is actually an ellipsis. One bit at most can be 1.
If ellipsis_mask == 0, then an implicit ellipsis mask of 1 << (m+1)
is provided. This means that foo[3:5] == foo[3:5, ...]. An ellipsis
implicitly creates as many range specifications as necessary to fully
specify the sliced range for every dimension. For example for a 4-dimensional
tensor foo the slice foo[2, ..., 5:8] implies foo[2, :, :, 5:8].
* new_axis_mask: a bitmask where bit i being 1 means the ith
specification creates a new shape 1 dimension. For example
foo[:4, tf.newaxis, :2] would produce a shape (4, 1, 2) tensor.
* shrink_axis_mask: a bitmask where bit i implies that the ith
specification should shrink the dimensionality. begin and end
must imply a slice of size 1 in the dimension. For example in
python one might do foo[:, 3, :] which would result in
shrink_axis_mask being 2.
Returns:
* Output: The output tensor. */
class StridedSlice {
public:
/// Optional attribute setters for StridedSlice
struct Attrs {
/** a bitmask where a bit i being 1 means to ignore the begin
value and instead use the largest interval possible. At runtime
begin[i] will be replaced with [0, n-1) if stride[i] > 0 or
[-1, n-1] if stride[i] < 0
Defaults to 0 */
TF_MUST_USE_RESULT Attrs BeginMask(int64 x) {
Attrs ret = *this;
ret.begin_mask_ = x;
return ret;
}
/** analogous to begin_mask
Defaults to 0 */
TF_MUST_USE_RESULT Attrs EndMask(int64 x) {
Attrs ret = *this;
ret.end_mask_ = x;
return ret;
}
/** a bitmask where bit i being 1 means the ith
position is actually an ellipsis. One bit at most can be 1.
If ellipsis_mask == 0, then an implicit ellipsis mask of 1 << (m+1)
is provided. This means that foo[3:5] == foo[3:5, ...]. An ellipsis
implicitly creates as many range specifications as necessary to fully
specify the sliced range for every dimension. For example for a 4-dimensional
tensor foo the slice foo[2, ..., 5:8] implies foo[2, :, :, 5:8].
Defaults to 0 */
TF_MUST_USE_RESULT Attrs EllipsisMask(int64 x) {
Attrs ret = *this;
ret.ellipsis_mask_ = x;
return ret;
}
/** a bitmask where bit i being 1 means the ith
specification creates a new shape 1 dimension. For example
foo[:4, tf.newaxis, :2] would produce a shape (4, 1, 2) tensor.
Defaults to 0 */
TF_MUST_USE_RESULT Attrs NewAxisMask(int64 x) {
Attrs ret = *this;
ret.new_axis_mask_ = x;
return ret;
}
/** a bitmask where bit i implies that the ith
specification should shrink the dimensionality. begin and end
must imply a slice of size 1 in the dimension. For example in
python one might do foo[:, 3, :] which would result in
shrink_axis_mask being 2.
Defaults to 0 */
TF_MUST_USE_RESULT Attrs ShrinkAxisMask(int64 x) {
Attrs ret = *this;
ret.shrink_axis_mask_ = x;
return ret;
}
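A minimal sketch computing `foo[1, 1:3]` through the Python equivalent `tf.strided_slice` and its mask encoding:

```python
import tensorflow as tf

foo = tf.constant([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
# shrink_axis_mask=1: bit 0 shrinks the first spec to a single index.
s = tf.strided_slice(foo, begin=[1, 1], end=[2, 3], strides=[1, 1],
                     shrink_axis_mask=1)
with tf.Session() as sess:
    print(sess.run(s))  # ==> [5 6]
```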
/** Assign `value` to the sliced l-value reference of `ref`.
The values of `value` are assigned to the positions in the variable
`ref` that are selected by the slice parameters. The slice parameters
`begin`, `end`, `strides`, etc. work exactly as in `StridedSlice`.
NOTE this op currently does not support broadcasting and so `value`'s
shape must be exactly the shape produced by the slice of `ref`.
Since `StridedSlice` cuts out pieces of its `input` which is size
`shape`, its gradient will have the same shape (which is passed here
as `shape`). The gradient will be zero in any element that the slice
does not select.
Arguments are the same as `StridedSlice` with the exception that
`dy` is the input gradient to be propagated and `shape` is the
shape of `StridedSlice`'s input.
/** Adds sparse updates to an existing tensor according to indices.
This operation creates a new tensor by adding sparse updates to the passed
in tensor.
This operation is very similar to `tf.scatter_nd_add`, except that the updates
are added onto an existing tensor (as opposed to a variable). If the memory
for the existing tensor cannot be re-used, a copy is made and updated.
`indices` is an integer tensor containing indices into a new tensor of shape
`shape`. The last dimension of `indices` can be at most the rank of `shape`:
indices.shape[-1] <= shape.rank
The last dimension of `indices` corresponds to indices into elements
(if `indices.shape[-1] = shape.rank`) or slices
(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of
`shape`. `updates` is a tensor with shape
indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of tensor_scatter_add is to add individual elements to a
tensor by index. For example, say we want to add 4 elements to a rank-1
tensor with 8 elements.
In Python, this scatter add operation would look like this:
```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_add(tensor, indices, updates)
with tf.Session() as sess:
    print(sess.run(updated))
```
The resulting tensor would look like this:
[1, 12, 1, 11, 10, 1, 1, 13]
We can also insert entire slices of a higher-rank tensor all at once. For
example, we can insert two slices in the first dimension of a
rank-3 tensor with two matrices of new values.
In Python, this scatter add operation would look like this:
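(The values below are illustrative: two `[4, 4]` slices added at rows 0 and 2 of a `[4, 4, 4]` tensor of ones.)

```python
import tensorflow as tf

indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_add(tensor, indices, updates)
with tf.Session() as sess:
    print(sess.run(updated))
```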
/** Subtracts sparse `updates` from an existing tensor according to `indices`.
This operation creates a new tensor by subtracting sparse `updates` from the
passed in `tensor`.
This operation is very similar to `tf.scatter_nd_sub`, except that the updates
are subtracted from an existing tensor (as opposed to a variable). If the memory
for the existing tensor cannot be re-used, a copy is made and updated.
`indices` is an integer tensor containing indices into a new tensor of shape
`shape`. The last dimension of `indices` can be at most the rank of `shape`:
indices.shape[-1] <= shape.rank
The last dimension of `indices` corresponds to indices into elements
(if `indices.shape[-1] = shape.rank`) or slices
(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of
`shape`. `updates` is a tensor with shape
indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of tensor_scatter_sub is to subtract individual elements
from a tensor by index. For example, say we want to subtract 4 scattered
elements from a rank-1 tensor with 8 elements.
In Python, this scatter subtract operation would look like this:
```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_sub(tensor, indices, updates)
with tf.Session() as sess:
    print(sess.run(updated))
```
The resulting tensor would look like this:
[1, -10, 1, -9, -8, 1, 1, -11]
We can also subtract entire slices of a higher-rank tensor all at once. For
example, we can subtract two slices in the first dimension of a
rank-3 tensor with two matrices of new values.
In Python, this scatter subtract operation would look like this:
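(The values below are illustrative: two `[4, 4]` slices subtracted at rows 0 and 2 of a `[4, 4, 4]` tensor of ones.)

```python
import tensorflow as tf

indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_sub(tensor, indices, updates)
with tf.Session() as sess:
    print(sess.run(updated))
```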
/** Scatter `updates` into an existing tensor according to `indices`.
This operation creates a new tensor by applying sparse `updates` to the passed
in `tensor`.
This operation is very similar to `tf.scatter_nd`, except that the updates are
scattered onto an existing tensor (as opposed to a zero-tensor). If the memory
for the existing tensor cannot be re-used, a copy is made and updated.
If `indices` contains duplicates, then their updates are accumulated (summed).
**WARNING**: The order in which updates are applied is nondeterministic, so the
output will be nondeterministic if `indices` contains duplicates -- because
of some numerical approximation issues, numbers summed in different order
may yield different results.
`indices` is an integer tensor containing indices into a new tensor of shape
`shape`. The last dimension of `indices` can be at most the rank of `shape`:
indices.shape[-1] <= shape.rank
The last dimension of `indices` corresponds to indices into elements
(if `indices.shape[-1] = shape.rank`) or slices
(if `indices.shape[-1] < shape.rank`) along dimension `indices.shape[-1]` of
`shape`. `updates` is a tensor with shape
indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of scatter is to insert individual elements in a tensor by
index. For example, say we want to insert 4 scattered elements in a rank-1
tensor with 8 elements.
In Python, this scatter operation would look like this:
```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_update(tensor, indices, updates)
with tf.Session() as sess:
    print(sess.run(updated))
```
The resulting tensor would look like this:
[1, 11, 1, 10, 9, 1, 1, 12]
We can also insert entire slices of a higher-rank tensor all at once. For
example, we can insert two slices in the first dimension of a
rank-3 tensor with two matrices of new values.
In Python, this scatter operation would look like this:
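(The values below are illustrative: two `[4, 4]` slices written at rows 0 and 2 of a `[4, 4, 4]` tensor of ones.)

```python
import tensorflow as tf

indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_update(tensor, indices, updates)
with tf.Session() as sess:
    print(sess.run(updated))
```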
/** Assign `value` to the sliced l-value reference of `input`.
The values of `value` are assigned to the positions in the tensor `input` that
are selected by the slice parameters. The slice parameters `begin`, `end`,
`strides`, etc. work exactly as in `StridedSlice`.
NOTE this op currently does not support broadcasting and so `value`'s shape
must be exactly the shape produced by the slice of `input`.
This operation creates a new tensor by replicating `input` `multiples` times.
The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,
and the values of `input` are replicated `multiples[i]` times along the 'i'th
dimension. For example, tiling `[a b c d]` by `[2]` produces
`[a b c d a b c d]`.
Arguments:
* scope: A Scope object
* input: 1-D or higher.
* multiples: 1-D. Length must be the same as the number of dimensions in `input`
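For example, a minimal sketch using the Python equivalent `tf.tile`:

```python
import tensorflow as tf

t = tf.constant([1, 2, 3, 4])  # the "[a b c d]" example above
with tf.Session() as sess:
    print(sess.run(tf.tile(t, [2])))  # ==> [1 2 3 4 1 2 3 4]
```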
This operation returns a tensor `y` containing all of the unique elements of `x`
sorted in the same order that they occur in `x`. This operation also returns a
tensor `idx` the same size as `x` that contains the index of each value of `x`
in the unique output `y`. In other words:
`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`
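For example, a minimal sketch using the Python equivalent `tf.unique`:

```python
import tensorflow as tf

x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
y, idx = tf.unique(x)
# y   ==> [1, 2, 4, 7, 8]
# idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
```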
/** Finds unique elements along an axis of a tensor.
This operation returns a tensor `y` containing the unique elements
along the `axis` of a tensor. The returned unique elements are sorted
in the same order as they occur along `axis` in `x`.
This operation also returns a tensor `idx` that is the same size as
the number of elements in `x` along the `axis` dimension. It
contains the index in the unique output `y`.
In other words, for a `1-D` tensor `x` with `axis = None`:
`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`
For a `2-D` tensor `x` with `axis = 0`:

```
# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=0)
y ==> [[1, 0, 0],
       [2, 0, 0]]
idx ==> [0, 0, 1]
```

For a `2-D` tensor `x` with `axis = 1`:

```
# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=1)
y ==> [[1, 0],
       [1, 0],
       [2, 0]]
idx ==> [0, 1, 1]
```
Arguments:
* scope: A Scope object
* x: A `Tensor`.
* axis: A `Tensor` of type `int32` (default: None). The axis of the Tensor to
find the unique elements.
Returns:
* `Output` y: A `Tensor`. Unique elements along the `axis` of `Tensor` x.
* `Output` idx: A 1-D Tensor of type `out_idx` that contains the index of each
value of x in the output y. */
class UniqueV2 {
public:
/// Optional attribute setters for UniqueV2
struct Attrs {
/// Defaults to DT_INT32
TF_MUST_USE_RESULT Attrs OutIdx(DataType x) {
Attrs ret = *this;
ret.out_idx_ = x;
return ret;
}
This operation returns a tensor `y` containing all of the unique elements of `x`
sorted in the same order that they occur in `x`. This operation also returns a
tensor `idx` the same size as `x` that contains the index of each value of `x`
in the unique output `y`. Finally, it returns a third tensor `count` that
contains the count of each element of `y` in `x`. In other words:
`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`
/** Finds unique elements along an axis of a tensor.
This operation returns a tensor `y` containing the unique elements
along the `axis` of a tensor. The returned unique elements are sorted
in the same order as they occur along `axis` in `x`.
This operation also returns a tensor `idx` and a tensor `count`
that are the same size as the number of elements in `x` along the
`axis` dimension. The `idx` contains the index in the unique output `y`
and the `count` contains the count in the unique output `y`.
In other words, for a `1-D` tensor `x` with `axis = None`:
`y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]`
Arguments:
* scope: A Scope object
* x: A `Tensor`.
* axis: A `Tensor` of type `int32` (default: None). The axis of the Tensor to
find the unique elements.
Returns:
* `Output` y: A `Tensor`. Unique elements along the `axis` of `Tensor` x.
* `Output` idx: A 1-D Tensor of type `out_idx` that contains the index of each
value of x in the output y.
* `Output` count: A 1-D Tensor. The count of each value of x in the output y. */
class UniqueWithCountsV2 {
public:
/// Optional attribute setters for UniqueWithCountsV2
struct Attrs {
/// Defaults to DT_INT32
TF_MUST_USE_RESULT Attrs OutIdx(DataType x) {
Attrs ret = *this;
ret.out_idx_ = x;
return ret;
}
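A minimal sketch of the 1-D case using the Python equivalent `tf.unique_with_counts`:

```python
import tensorflow as tf

x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
y, idx, count = tf.unique_with_counts(x)
# y     ==> [1, 2, 4, 7, 8]
# idx   ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
# count ==> [2, 1, 3, 1, 2]
```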
/** Unpacks a given dimension of a rank-`R` tensor into `num` rank-`(R-1)` tensors.
Unpacks `num` tensors from `value` by chipping it along the `axis` dimension.
For example, given a tensor of shape `(A, B, C, D)`;
If `axis == 0` then the i'th tensor in `output` is the slice `value[i, :, :, :]`
and each tensor in `output` will have shape `(B, C, D)`. (Note that the
dimension unpacked along is gone, unlike `split`).
If `axis == 1` then the i'th tensor in `output` is the slice `value[:, i, :, :]`
and each tensor in `output` will have shape `(A, C, D)`.
Etc.
This is the opposite of `pack`.
Arguments:
* scope: A Scope object
* value: 1-D or higher, with `axis` dimension size equal to `num`.
Optional attributes (see `Attrs`):
* axis: Dimension along which to unpack. Negative values wrap around, so the
valid range is `[-R, R)`.
Returns:
* `OutputList`: The list of tensors unpacked from `value`. */
class Unstack {
public:
/// Optional attribute setters for Unstack
struct Attrs {
/** Dimension along which to unpack. Negative values wrap around, so the
valid range is `[-R, R)`.
Defaults to 0 */
TF_MUST_USE_RESULT Attrs Axis(int64 x) {
Attrs ret = *this;
ret.axis_ = x;
return ret;
}
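For example, a minimal sketch using the Python equivalent `tf.unstack`:

```python
import tensorflow as tf

value = tf.zeros([4, 3, 2])
parts = tf.unstack(value, axis=0)  # 4 tensors, each of shape (3, 2)
print(len(parts), parts[0].shape)  # ==> 4 (3, 2)
```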
/** Converts an array of flat indices into a tuple of coordinate arrays.
Example:

```
y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])
# 'dims' represents a hypothetical (3, 3) tensor of indices:
# [[0, 1, 2],
#  [3, 4, 5],
#  [6, 7, 8]]
# For each entry from 'indices', this operation returns
# its coordinates, such as:
# 2 ==> (0, 2)
# 5 ==> (1, 2)
# 7 ==> (2, 1)
y ==> [[0, 1, 2], [2, 2, 1]]
```

Equivalent to np.unravel_index.
Arguments:
* scope: A Scope object
* indices: A 0-D or 1-D `int` Tensor whose elements are indices into the
flattened version of an array of dimensions dims.
* dims: A 1-D `int` Tensor. The shape of the array to use for unraveling
indices.
Returns:
* `Output`: A 2-D (or 1-D if indices is 0-D) tensor where each row has the
same shape as the indices array. */
class UnravelIndex {
public:
UnravelIndex(const tensorflow::Scope& scope, tensorflow::Input indices,
tensorflow::Input dims);
operator ::tensorflow::Output() const { return output; }
operator ::tensorflow::Input() const { return output; }
::tensorflow::Node* node() const { return output.node(); }
/** Returns locations of nonzero / true values in a tensor.
This operation returns the coordinates of true elements in `condition`. The
coordinates are returned in a 2-D tensor where the first dimension (rows)
represents the number of true elements, and the second dimension (columns)
represents the coordinates of the true elements. Keep in mind, the shape of
the output tensor can vary depending on how many true values there are in
`condition`. Indices are output in row-major order.
For example:

```
# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0],
                  [1, 0]]
```

```
# 'condition' tensor is [[[True, False]
#                         [True, False]]
#                        [[False, True]
#                         [False, True]]
#                        [[False, False]
#                         [False, True]]]
# 'condition' has 5 true values, so output has 5 coordinates.
# 'condition' has rank of 3, so coordinates have three indices.
where(condition) ==> [[0, 0, 0],
                      [0, 1, 0],
                      [1, 0, 1],
                      [1, 1, 1],
                      [2, 1, 1]]
```
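A runnable sketch of the first case using the Python equivalent `tf.where`:

```python
import tensorflow as tf

condition = tf.constant([[True, False],
                         [True, False]])
with tf.Session() as sess:
    print(sess.run(tf.where(condition)))  # ==> [[0 0] [1 0]]
```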