Module: tf.experimental.numpy

tf.experimental.numpy: NumPy API on TensorFlow.

This module provides a subset of the NumPy API, built on top of TensorFlow operations. The APIs are based on, and have been tested with, NumPy version 1.16.

The set of supported APIs may be expanded over time. Future releases may also change the baseline version of the NumPy API being supported. Some systematic differences with NumPy are listed later in the "Differences with NumPy" section.

Getting Started

Please also see TensorFlow NumPy Guide.

In the code snippets below, we assume that tf.experimental.numpy is imported as tnp and NumPy is imported as np.

import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import numpy as np

print(tnp.ones([2, 1]) + tnp.ones([1, 2]))

Types

The module provides an ndarray class which wraps an immutable tf.Tensor. Additional functions are provided that accept array-like objects. Here array-like objects include ndarrays as defined by this module, tf.Tensor, and types accepted by NumPy.

A subset of NumPy dtypes are supported. Type promotion follows NumPy semantics.

print(tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8))
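
As a quick check (a minimal sketch, assuming the imports above), the dtype of the result can be inspected directly; under NumPy promotion rules, int16 and uint8 promote to int16:

x = tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8)
print(x.dtype)  # expected: int16, per NumPy type promotion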

Array Interface

The ndarray class implements the __array__ interface. This should allow these objects to be passed into contexts that expect a NumPy or array-like object (e.g. matplotlib).

np.sum(tnp.ones([1, 2]) + np.ones([2, 1]))
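
For example, an ndarray can also be converted explicitly with np.asarray (a minimal sketch, assuming the imports above):

a = tnp.ones([1, 2])
print(np.asarray(a))  # conversion goes through the __array__ interface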

TF Interoperability

TF-NumPy API calls can be interleaved with TensorFlow calls without incurring Tensor data copies. This is true even if the ndarray or tf.Tensor is placed on a non-CPU device.

In general, the expected behavior should be on par with that of code involving tf.Tensor and running stateless TensorFlow functions on them.

tnp.sum(tnp.ones([1, 2]) + tf.ones([2, 1]))

Note that __array_priority__ is currently chosen to be lower than that of tf.Tensor. Hence the + operator above returns a tf.Tensor.
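
This can be checked directly (a minimal sketch); per the note above, the result is expected to be a tf.Tensor:

result = tnp.ones([1, 2]) + tf.ones([2, 1])
print(type(result))  # expected: tf.Tensor, since tf.Tensor has higher priority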

Additional examples of interoperability include:

  • using tf.GradientTape() scopes to compute gradients through TF-NumPy API calls (see the sketch after this list)
  • using tf.distribute.Strategy scopes for distributed execution
  • using tf.vectorized_map() for speeding up code using auto-vectorization
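
Below is a minimal sketch of the first item; the variable names are illustrative, and tnp.sum is used as a representative TF-NumPy call:

v = tf.Variable(tf.ones([2, 2]))
with tf.GradientTape() as tape:
  y = tnp.sum(v * v)        # TF-NumPy call; gradients flow through it
print(tape.gradient(y, v))  # d(sum(v*v))/dv == 2 * v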

Device Support

Given that ndarray and the functions in this module wrap TensorFlow constructs, the code will have GPU and TPU support on par with TensorFlow. Device placement can be controlled with tf.device scopes. Note that these devices could be local or remote.

with tf.device("GPU:0"):
  x = tnp.ones([1, 2])
print(tf.convert_to_tensor(x).device)

Graph and Eager Modes

Eager mode execution should typically match NumPy semantics of executing op-by-op. However, the same code can be executed in graph mode by putting it inside a tf.function.
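
For example (a minimal sketch, assuming the imports above), the snippet from the Getting Started section can be wrapped in a tf.function:

@tf.function
def add_and_sum(a, b):
  # TF-NumPy calls trace into a TensorFlow graph like regular TF ops.
  return tnp.sum(a + b)

print(add_and_sum(tnp.ones([2, 1]), tnp.ones([1, 2])))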