A replacement for tf.Variable which follows initial value placement.
Inherits From: Variable
tf.experimental.dtensor.DVariable(
initial_value, *args, dtype=None, **kwargs
)
The class also handles restore/save operations in DTensor. Note that DVariable currently falls back to a normal tf.Variable if initial_value is not a DTensor.
Methods
assign
assign(
value, use_locking=None, name=None, read_value=True
)
Assigns a new value to this variable.
Args | |
---|---|
`value` | A `Tensor`. The new value for this variable.
`use_locking` | If `True`, use locking during the assignment.
`name` | The name to use for the assignment.
`read_value` | A `bool`. Whether to read and return the new value of the variable or not.

Returns | |
---|---|
If `read_value` is `True`, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the `Operation` that does the assignment, and when in eager mode it will return `None`. |
assign_add
assign_add(
delta, use_locking=None, name=None, read_value=True
)
Adds a value to this variable.
Args | |
---|---|
`delta` | A `Tensor`. The value to add to this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name to use for the operation.
`read_value` | A `bool`. Whether to read and return the new value of the variable or not.

Returns | |
---|---|
If `read_value` is `True`, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the `Operation` that does the assignment, and when in eager mode it will return `None`. |
assign_sub
assign_sub(
delta, use_locking=None, name=None, read_value=True
)
Subtracts a value from this variable.
Args | |
---|---|
`delta` | A `Tensor`. The value to subtract from this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name to use for the operation.
`read_value` | A `bool`. Whether to read and return the new value of the variable or not.

Returns | |
---|---|
If `read_value` is `True`, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the `Operation` that does the assignment, and when in eager mode it will return `None`. |
batch_scatter_update
batch_scatter_update(
sparse_delta, use_locking=False, name=None
)
Assigns `tf.IndexedSlices` to this variable batch-wise.

Analogous to `batch_gather`. This assumes that this variable and the `sparse_delta` `IndexedSlices` have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

`num_prefix_dims = sparse_delta.indices.ndims - 1`
`batch_dim = num_prefix_dims + 1`
`sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[batch_dim:]`

where

`sparse_delta.updates.shape[:num_prefix_dims] == sparse_delta.indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims]`

And the operation performed can be expressed as:

`var[i_1, ..., i_n, sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[i_1, ..., i_n, j]`

When `sparse_delta.indices` is a 1-D tensor, this operation is equivalent to `scatter_update`.

To avoid this operation, one could instead loop over the first `ndims` of the variable and use `scatter_update` on the subtensors that result from slicing the first dimension. This is a valid option for `ndims = 1`, but less efficient than this implementation.
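The shape contract above can be illustrated with a plain-Python sketch (a hypothetical helper, not the DTensor/TensorFlow implementation), using one leading batch dimension:

```python
def batch_scatter_update(var, indices, updates):
    """Plain-Python sketch: var, indices, and updates share the same
    leading (batch) dimension; each update row is scattered into the
    last dimension of the matching var row."""
    for b, (idx_row, upd_row) in enumerate(zip(indices, updates)):
        for j, col in enumerate(idx_row):
            var[b][col] = upd_row[j]
    return var

var = [[1, 2, 3],
       [4, 5, 6]]
indices = [[0, 2],   # positions in the last dimension, per batch row
           [1, 1]]
updates = [[10, 30],
           [50, 55]]
batch_scatter_update(var, indices, updates)
# var -> [[10, 2, 30], [4, 55, 6]]
```

Note that when the same index repeats within a row (as in the second batch row here), the last update wins, matching assignment semantics rather than accumulation.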
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to be assigned to this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
count_up_to
count_up_to(
limit
)
Increments this variable until it reaches `limit`. (deprecated)

When that Op is run it tries to increment the variable by `1`. If incrementing the variable would bring it above `limit` then the Op raises the exception `OutOfRangeError`.

If no error is raised, the Op outputs the value of the variable before the increment.

This is essentially a shortcut for `count_up_to(self, limit)`.
Args | |
---|---|
`limit` | Value at which incrementing the variable raises an error.

Returns | |
---|---|
A `Tensor` that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct. |
eval
eval(
session=None
)
Evaluates and returns the value of this variable.
experimental_ref
experimental_ref()
Deprecated: use `ref()` instead.
from_proto
@staticmethod
from_proto( variable_def, import_scope=None )
Returns a `Variable` object created from `variable_def`.
gather_nd
gather_nd(
indices, name=None
)
Reads the value of this variable sparsely, using `gather_nd`.
get_shape
get_shape() -> tf.TensorShape
Alias of `Variable.shape`.
initialized_value
initialized_value()
Returns the value of the initialized variable. (deprecated)
You should use this instead of the variable itself to initialize another variable with a value that depends on the value of this variable.
# Initialize 'v' with a random tensor.
v = tf.Variable(tf.random.truncated_normal([10, 40]))
# Use `initialized_value` to guarantee that `v` has been
# initialized before its value is used to initialize `w`.
# The random values are picked only once.
w = tf.Variable(v.initialized_value() * 2.0)
Returns | |
---|---|
A `Tensor` holding the value of this variable after its initializer has run. |
is_initialized
is_initialized(
name=None
)
Checks whether a resource variable has been initialized.
Outputs a boolean scalar indicating whether the tensor has been initialized.

Args | |
---|---|
`name` | A name for the operation (optional).

Returns | |
---|---|
A `Tensor` of type `bool`. |
load
load(
value, session=None
)
Load new value into this variable. (deprecated)
Writes new value to variable's memory. Doesn't add ops to the graph.
This convenience method requires a session where the graph
containing this variable has been launched. If no session is
passed, the default session is used. See tf.compat.v1.Session
for more
information on launching a graph and on sessions.
v = tf.Variable([1, 2])
init = tf.compat.v1.global_variables_initializer()
with tf.compat.v1.Session() as sess:
    sess.run(init)
    # Usage passing the session explicitly.
    v.load([2, 3], sess)
    print(v.eval(sess))  # prints [2 3]
    # Usage with the default session. The 'with' block
    # above makes 'sess' the default session.
    v.load([3, 4], sess)
    print(v.eval())  # prints [3 4]
Args | |
---|---|
`value` | New variable value.
`session` | The session to use to evaluate this variable. If none, the default session is used.

Raises | |
---|---|
`ValueError` | Session is not passed and no default session.
numpy
numpy()
read_value
read_value()
Constructs an op which reads the value of this variable.
Should be used when there are multiple reads, or when it is desirable to read the value only after some condition is true.
Returns | |
---|---|
The value of the variable. |
read_value_no_copy
read_value_no_copy()
Constructs an op which reads the value of this variable without copy.
The variable is read without making a copy even when it has been sparsely accessed. Variables in copy-on-read mode will be converted to copy-on-write mode.
Returns | |
---|---|
The value of the variable. |
ref
ref()
Returns a hashable reference object to this Variable.
The primary use case for this API is to put variables in a set/dictionary.
We can't put variables in a set/dictionary directly, because `variable.__hash__()` is no longer available starting with TensorFlow 2.0.

The following will raise an exception starting in 2.0:
x = tf.Variable(5)
y = tf.Variable(10)
z = tf.Variable(10)
variable_set = {x, y, z}
Traceback (most recent call last):
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
variable_dict = {x: 'five', y: 'ten'}
Traceback (most recent call last):
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
Instead, we can use `variable.ref()`:
variable_set = {x.ref(), y.ref(), z.ref()}
x.ref() in variable_set
True
variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'}
variable_dict[y.ref()]
'ten'
Also, the reference object provides a `.deref()` function that returns the original Variable.
x = tf.Variable(5)
x.ref().deref()
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=5>
scatter_add
scatter_add(
sparse_delta, use_locking=False, name=None
)
Adds `tf.IndexedSlices` to this variable.
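As a sketch of the semantics (plain Python with a hypothetical helper, not the TensorFlow kernel), adding an IndexedSlices-style delta to a 1-D variable amounts to accumulating each update value at its paired index:

```python
def scatter_add(var, indices, updates):
    # Plain-Python sketch of scatter_add semantics for a 1-D variable:
    # var[i] += u for each (index, update) pair in the sparse delta.
    # Repeated indices accumulate.
    for i, u in zip(indices, updates):
        var[i] += u
    return var

var = [0, 0, 0, 0]
scatter_add(var, indices=[1, 3, 1], updates=[5, 2, 1])
# var -> [0, 6, 0, 2]   (index 1 receives both 5 and 1)
```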
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to be added to this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
scatter_div
scatter_div(
sparse_delta, use_locking=False, name=None
)
Divides this variable by `tf.IndexedSlices`.
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to divide this variable by.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
scatter_max
scatter_max(
sparse_delta, use_locking=False, name=None
)
Updates this variable with the max of `tf.IndexedSlices` and itself.
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to use as an argument of max with this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
scatter_min
scatter_min(
sparse_delta, use_locking=False, name=None
)
Updates this variable with the min of `tf.IndexedSlices` and itself.
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to use as an argument of min with this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
scatter_mul
scatter_mul(
sparse_delta, use_locking=False, name=None
)
Multiplies this variable by `tf.IndexedSlices`.
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to multiply this variable by.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
scatter_nd_add
scatter_nd_add(
indices, updates, name=None
)
Applies sparse addition to individual values or slices in a Variable.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`. It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

`[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]`
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = ref.scatter_nd_add(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(add))

The resulting update to ref would look like this:

[1, 13, 3, 14, 14, 6, 7, 20]
See `tf.scatter_nd` for more details about how to make updates to slices.
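The worked example above can be reproduced with a plain-Python sketch of the 1-D case (a hypothetical helper, not the TF op), where `indices` has shape `[N, 1]` so `K = P = 1`:

```python
def scatter_nd_add(ref, indices, updates):
    # Plain-Python sketch: for a 1-D ref and [N, 1] indices, each update
    # is added to ref at the row named by the single index element.
    for (i,), u in zip(indices, updates):
        ref[i] += u
    return ref

ref = [1, 2, 3, 4, 5, 6, 7, 8]
scatter_nd_add(ref, [[4], [3], [1], [7]], [9, 10, 11, 12])
# ref -> [1, 13, 3, 14, 14, 6, 7, 20]
```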
Args | |
---|---|
`indices` | The indices to be used in the operation.
`updates` | The values to be used in the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |
scatter_nd_max
scatter_nd_max(
indices, updates, name=None
)
Updates this variable with the element-wise max of itself and the given sparse `updates` at the given `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`. It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

`[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]`
See `tf.scatter_nd` for more details about how to make updates to slices.
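In the 1-D case, the semantics can be sketched in plain Python (a hypothetical helper, not the TF op): each scattered entry keeps the larger of its current value and the update.

```python
def scatter_nd_max(ref, indices, updates):
    # Plain-Python sketch of the 1-D case: keep the element-wise max of
    # the existing entry and the update at each scattered index.
    for (i,), u in zip(indices, updates):
        ref[i] = max(ref[i], u)
    return ref

ref = [1, 2, 3, 4, 5, 6, 7, 8]
scatter_nd_max(ref, [[4], [3], [1], [7]], [9, 10, 11, 12])
# ref -> [1, 11, 3, 10, 9, 6, 7, 12]
```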
Args | |
---|---|
`indices` | The indices to be used in the operation.
`updates` | The values to be used in the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |
scatter_nd_min
scatter_nd_min(
indices, updates, name=None
)
Updates this variable with the element-wise min of itself and the given sparse `updates` at the given `indices`.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`. It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

`[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]`
See `tf.scatter_nd` for more details about how to make updates to slices.
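Mirroring the max case, a plain-Python sketch of the 1-D semantics (a hypothetical helper, not the TF op): each scattered entry keeps the smaller of its current value and the update.

```python
def scatter_nd_min(ref, indices, updates):
    # Plain-Python sketch of the 1-D case: keep the element-wise min of
    # the existing entry and the update at each scattered index.
    for (i,), u in zip(indices, updates):
        ref[i] = min(ref[i], u)
    return ref

ref = [1, 2, 3, 4, 5, 6, 7, 8]
scatter_nd_min(ref, [[4], [3], [1], [7]], [3, 1, 2, 6])
# ref -> [1, 2, 3, 1, 3, 6, 7, 6]
```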
Args | |
---|---|
`indices` | The indices to be used in the operation.
`updates` | The values to be used in the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |
scatter_nd_sub
scatter_nd_sub(
indices, updates, name=None
)
Applies sparse subtraction to individual values or slices in a Variable.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`. It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

`[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]`
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this:

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = ref.scatter_nd_sub(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(op))

The resulting update to ref would look like this:

[1, -9, 3, -6, -4, 6, 7, -4]
See `tf.scatter_nd` for more details about how to make updates to slices.
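The subtraction example can be checked with a plain-Python sketch of the 1-D case (a hypothetical helper, not the TF op):

```python
def scatter_nd_sub(ref, indices, updates):
    # Plain-Python sketch: for a 1-D ref and [N, 1] indices, each update
    # is subtracted from ref at the row named by the single index element.
    for (i,), u in zip(indices, updates):
        ref[i] -= u
    return ref

ref = [1, 2, 3, 4, 5, 6, 7, 8]
scatter_nd_sub(ref, [[4], [3], [1], [7]], [9, 10, 11, 12])
# ref -> [1, -9, 3, -6, -4, 6, 7, -4]
```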
Args | |
---|---|
`indices` | The indices to be used in the operation.
`updates` | The values to be used in the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |
scatter_nd_update
scatter_nd_update(
indices, updates, name=None
)
Applies sparse assignment to individual values or slices in a Variable.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.

`indices` must be an integer tensor, containing indices into `ref`. It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.

The innermost dimension of `indices` (with length `K`) corresponds to indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th dimension of `ref`.

`updates` is a `Tensor` of rank `Q-1+P-K` with shape:

`[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]`
For example, say we want to assign 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = ref.scatter_nd_update(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(op))

The resulting update to ref would look like this:

[1, 11, 3, 10, 9, 6, 7, 12]
See `tf.scatter_nd` for more details about how to make updates to slices.
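The assignment example can likewise be checked with a plain-Python sketch of the 1-D case (a hypothetical helper, not the TF op):

```python
def scatter_nd_update(ref, indices, updates):
    # Plain-Python sketch: for a 1-D ref and [N, 1] indices, overwrite ref
    # at each scattered index with the corresponding update.
    for (i,), u in zip(indices, updates):
        ref[i] = u
    return ref

ref = [1, 2, 3, 4, 5, 6, 7, 8]
scatter_nd_update(ref, [[4], [3], [1], [7]], [9, 10, 11, 12])
# ref -> [1, 11, 3, 10, 9, 6, 7, 12]
```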
Args | |
---|---|
`indices` | The indices to be used in the operation.
`updates` | The values to be used in the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |
scatter_sub
scatter_sub(
sparse_delta, use_locking=False, name=None
)
Subtracts `tf.IndexedSlices` from this variable.
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to be subtracted from this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
scatter_update
scatter_update(
sparse_delta, use_locking=False, name=None
)
Assigns `tf.IndexedSlices` to this variable.
Args | |
---|---|
`sparse_delta` | `tf.IndexedSlices` to be assigned to this variable.
`use_locking` | If `True`, use locking during the operation.
`name` | The name of the operation.

Returns | |
---|---|
The updated variable. |

Raises | |
---|---|
`TypeError` | If `sparse_delta` is not an `IndexedSlices`.
set_shape
set_shape(
shape
)
Overrides the shape for this variable.
Args | |
---|---|
`shape` | The `TensorShape` representing the overridden shape.
sparse_read
sparse_read(
indices, name=None
)
Reads the value of this variable sparsely, using `gather`.
to_proto
to_proto(
export_scope=None
)
Converts a `ResourceVariable` to a `VariableDef` protocol buffer.
Args | |
---|---|
`export_scope` | Optional `string`. Name scope to remove.

Raises | |
---|---|
`RuntimeError` | If run in EAGER mode.

Returns | |
---|---|
A `VariableDef` protocol buffer, or `None` if the Variable is not in the specified name scope. |
value
value()
A cached operation which reads the value of this variable.
__abs__
__abs__(
name=None
)
__add__
__add__(
y
)
__and__
__and__(
y
)
__array__
__array__(
dtype=None
)
Allows direct conversion to a numpy array.
np.array(tf.Variable([1.0]))
array([1.], dtype=float32)
Returns | |
---|---|
The variable value as a numpy array. |
__bool__
__bool__()
__div__
__div__(
y
)
__eq__
__eq__(
other
)
Compares two variables element-wise for equality.
__floordiv__
__floordiv__(
y
)
__ge__
__ge__(
    y: Annotated[Any, tf.raw_ops.Any], name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x >= y) element-wise.
Example:
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
Args | |
---|---|
`x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
`y` | A `Tensor`. Must have the same type as `x`.
`name` | A name for the operation (optional).

Returns | |
---|---|
A `Tensor` of type `bool`. |
__getitem__
__getitem__(
slice_spec
)
Creates a slice helper object given a variable.
This allows creating a sub-tensor from part of the current contents of a variable. See `tf.Tensor.__getitem__` for detailed examples of slicing.

This function additionally allows assignment to a sliced range, similar to `__setitem__` functionality in Python. However, the syntax is different so that the user can capture the assignment operation for grouping or passing to `sess.run()` in TF1.
For example,

import tensorflow as tf
A = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.float32)
print(A[:2, :2])  # => [[1, 2], [4, 5]]
A[:2, :2].assign(22. * tf.ones((2, 2)))
print(A)  # => [[22, 22, 3], [22, 22, 6], [7, 8, 9]]

Note that assignments currently do not support NumPy broadcasting semantics.
Args | |
---|---|
`var` | An `ops.Variable` object.
`slice_spec` | The arguments to `Tensor.__getitem__`.

Returns | |
---|---|
The appropriate slice of "tensor", based on "slice_spec", as an operator. The operator also has an `assign()` method that can be used to generate an assignment operator. |

Raises | |
---|---|
`ValueError` | If a slice range is negative size.
`TypeError` | If the slice indices aren't int, slice, ellipsis, tf.newaxis or int32/int64 tensors.
__gt__
__gt__(
    y: Annotated[Any, tf.raw_ops.Any], name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x > y) element-wise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args | |
---|---|
`x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
`y` | A `Tensor`. Must have the same type as `x`.
`name` | A name for the operation (optional).

Returns | |
---|---|
A `Tensor` of type `bool`. |
__invert__
__invert__(
name=None
)
__iter__
__iter__()
When executing eagerly, iterates over the value of the variable.
__le__
__le__(
    y: Annotated[Any, tf.raw_ops.Any], name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x <= y) element-wise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
Args | |
---|---|
`x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
`y` | A `Tensor`. Must have the same type as `x`.
`name` | A name for the operation (optional).

Returns | |
---|---|
A `Tensor` of type `bool`. |
__lt__
__lt__(
    y: Annotated[Any, tf.raw_ops.Any], name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x < y) element-wise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
Args | |
---|---|
`x` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
`y` | A `Tensor`. Must have the same type as `x`.
`name` | A name for the operation (optional).

Returns | |
---|---|
A `Tensor` of type `bool`. |
__matmul__
__matmul__(
y
)
__mod__
__mod__(
y
)
__mul__
__mul__(
y
)
__ne__
__ne__(
other
)
Compares two variables element-wise for inequality.
__neg__
__neg__(
    name=None
) -> Annotated[Any, tf.raw_ops.Any]
Computes numerical negative value element-wise.
I.e., \(y = -x\).
Args | |
---|---|
`x` | A `Tensor`. Must be one of the following types: `bfloat16`, `half`, `float32`, `float64`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
`name` | A name for the operation (optional).

Returns | |
---|---|
A `Tensor`. Has the same type as `x`. |
__nonzero__
__nonzero__()
__or__
__or__(
y
)
__pow__
__pow__(
y
)
__radd__
__radd__(
x
)
__rand__
__rand__(
x
)
__rdiv__
__rdiv__(
x
)
__rfloordiv__
__rfloordiv__(
x
)
__rmatmul__
__rmatmul__(
x
)
__rmod__
__rmod__(
x
)
__rmul__
__rmul__(
x
)
__ror__
__ror__(
x
)
__rpow__
__rpow__(
x
)
__rsub__
__rsub__(
x
)
__rtruediv__
__rtruediv__(
x
)
__rxor__
__rxor__(
x
)
__sub__
__sub__(
y
)
__truediv__
__truediv__(
y
)
__xor__
__xor__(
y
)