tf.experimental.numpy: NumPy API on TensorFlow.
This module provides a subset of the NumPy API, built on top of TensorFlow operations. The APIs are based on, and have been tested against, NumPy version 1.16.
The set of supported APIs may be expanded over time, and future releases may change the baseline version of the NumPy API being supported. Some systematic differences from NumPy are described later in the "Differences with NumPy" section.
Please also see the TensorFlow NumPy Guide.
In the code snippets below, we will assume that `tf.experimental.numpy` is imported as `tnp` and NumPy is imported as `np`:

```python
print(tnp.ones([2, 1]) + np.ones([1, 2]))
```
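TF-NumPy follows NumPy broadcasting rules, so the snippet above produces a (2, 2) result. A minimal sketch using plain NumPy, which applies the same broadcasting semantics, to illustrate:

```python
import numpy as np

# Broadcasting a (2, 1) array against a (1, 2) array:
# each size-1 dimension is stretched to match the other operand,
# so the result has shape (2, 2).
a = np.ones([2, 1])
b = np.ones([1, 2])
result = a + b
print(result.shape)  # (2, 2)
print(result)        # every element is 1.0 + 1.0 == 2.0
```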
The module provides an `ndarray` class which wraps an immutable `tf.Tensor`. Additional functions are provided which accept array-like objects. Here array-like objects include `ndarray` as defined by this module, as well as `tf.Tensor`, in addition to types accepted by NumPy.
A subset of NumPy dtypes are supported. Type promotion follows NumPy semantics.

```python
print(tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8))
```
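Because promotion follows NumPy semantics, the `int16` and `uint8` operands above promote to `int16` (every `uint8` value is representable in `int16`). This can be checked with plain NumPy, which implements the same rule:

```python
import numpy as np

# NumPy's promotion rule for a mixed int16/uint8 operation:
# all uint8 values fit in int16, so int16 is the common type.
print(np.promote_types(np.int16, np.uint8))  # int16

x = np.ones([1, 2], dtype=np.int16) + np.ones([2, 1], dtype=np.uint8)
print(x.dtype)  # int16
print(x.shape)  # (2, 2) -- broadcasting also applies
```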
The `ndarray` class implements the `__array__` interface. This should allow these objects to be passed into contexts that expect a NumPy or array-like object (e.g. matplotlib).
```python
np.sum(tnp.ones([1, 2]) + np.ones([2, 1]))
```
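The `__array__` protocol is what makes this work: NumPy calls it to obtain a plain `np.ndarray` from a foreign object. A minimal sketch of the mechanism using a hypothetical wrapper class (not part of TF-NumPy):

```python
import numpy as np

class Wrapped:
    """Hypothetical array wrapper implementing the __array__ protocol."""
    def __init__(self, data):
        self._data = data

    def __array__(self, dtype=None):
        # NumPy calls this to convert the object into an np.ndarray.
        arr = np.asarray(self._data)
        return arr.astype(dtype) if dtype is not None else arr

w = Wrapped([[1.0, 2.0]])
print(np.asarray(w).shape)  # (1, 2) -- converted via __array__
total = np.sum(w)           # np.sum accepts any object convertible this way
print(total)                # 3.0
```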
TF-NumPy API calls can be interleaved with TensorFlow calls without incurring Tensor data copies. This is true even if the `tf.Tensor` is placed on a non-CPU device. In general, the expected behavior should be on par with that of code involving `tf.Tensor` and running stateless TensorFlow functions on them.

```python
tnp.sum(tnp.ones([1, 2]) + tf.ones([2, 1]))
```
Note that the `__array_priority__` is currently chosen to be lower than that of `tf.Tensor`. Hence the `+` operator above returns a `tf.Tensor`.
Additional examples of interoperability include:

- using a `with tf.GradientTape()` scope to compute gradients through the TF-NumPy API calls.
- using a `tf.distribute.Strategy` scope for distributed execution.
- using `tf.vectorized_map()` for speeding up code using auto-vectorization.
Since TF-NumPy's `ndarray` and functions wrap TensorFlow constructs, the code will have GPU and TPU support on par with TensorFlow. Device placement can be controlled by using `with tf.device` scopes. Note that these devices could be local or remote.

```python
with tf.device("GPU:0"):
  x = tnp.ones([1, 2])
print(tf.convert_to_tensor(x).device)
```
Graph and Eager Modes
Eager mode execution should typically match NumPy semantics of executing op-by-op. However, the same code can be executed in graph mode by putting it inside a `tf.function`. The function body can contain NumPy code, and the inputs can be `ndarray` as well.

```python
@tf.function
def f(x, y):
  return tnp.sum(x + y)

f(tnp.ones([1, 2]), tf.ones([2, 1]))
```
However, note that graph mode execution can change the behavior of certain operations, since symbolic execution may not have information that is only computed at runtime. Some differences are:
- Shapes can be incomplete or unknown in graph mode. This means that `ndarray.shape`, `ndarray.size` and `ndarray.ndim` can return `ndarray` objects instead of returning integer (or tuple of integer) values.
- `__len__`, `__iter__` and `__index__` properties of `ndarray` may similarly not be supported in graph mode. Code using these may need to change to use explicit shape operations or control flow constructs.
- Also note the AutoGraph limitations.
Mutation and Variables
`ndarray`s currently wrap immutable `tf.Tensor`s. Hence mutation operations like slice assignment are not supported. This may change in the future. Note however that one can directly construct a `tf.Variable` and use that with the TF-NumPy APIs.

```python
tf_var = tf.Variable(2.0)
tf_var.assign_add(tnp.square(tf_var))
```
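Since slice assignment is unavailable on immutable arrays, updates have to be expressed functionally, by building a new array instead of mutating an existing one. A sketch in plain NumPy of one such out-of-place pattern, using `np.where`, which carries over to immutable TF-NumPy arrays:

```python
import numpy as np

x = np.zeros(5)

# Instead of the in-place slice assignment `x[1:3] = 7.0`,
# build a new array: take 7.0 where the index mask is set,
# and keep the old value elsewhere. `x` itself is left untouched.
mask = (np.arange(5) >= 1) & (np.arange(5) < 3)
y = np.where(mask, 7.0, x)

print(y)  # [0. 7. 7. 0. 0.]
```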
Differences with NumPy
Here is a non-exhaustive list of differences:
- Not all dtypes are currently supported, e.g. `np.recarray` types are not supported.
- `ndarray` storage is in C order only. Fortran order, views, and `stride_tricks` are not supported.
- Only a subset of functions and modules are supported. This set will be expanded over time. For supported functions, some arguments or argument values may not be supported. These differences are generally noted in the function documentation. Full `ufunc` support is also not provided.
- Buffer mutation is currently not supported, since `ndarray`s wrap immutable tensors. This means that output buffer arguments (e.g. `out` in ufuncs) are not supported.
- The NumPy C API is not supported. NumPy's Cython and SWIG integrations are not supported.
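For context on the `out` restriction above: in plain NumPy, ufuncs can write results into a caller-provided buffer, which requires mutating that buffer in place, something a wrapper around immutable tensors cannot do. A small NumPy-only illustration of the behavior TF-NumPy omits:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
buf = np.empty(3)

# Plain NumPy: np.add writes into `buf` in place via the `out` argument.
result = np.add(a, b, out=buf)
print(result)         # [11. 22. 33.]
print(result is buf)  # True -- the provided buffer was mutated

# TF-NumPy wraps immutable tensors, so an equivalent `out=` argument
# cannot be supported; results are always newly allocated.
```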
random module: Public API for tf.experimental.numpy.random namespace.
class bool_: Boolean type (True or False), stored as a byte.
class complex128: Complex number type composed of two double-precision floating-point numbers.
class complex64: Complex number type composed of two single-precision floating-point numbers.
class complex_: Complex number type composed of two double-precision floating-point numbers.
class float16: Half-precision floating-point number type.