tf.experimental.dtensor.create_distributed_mesh

Creates a distributed mesh.

This is similar to create_mesh, but takes a different set of arguments to create a mesh that spans evenly across a multi-client DTensor cluster.

For CPU and GPU meshes, users can choose to use fewer local devices than are available by passing local_devices.

For TPU meshes, the DTensor runtime only supports meshes that use all available TPU cores.

Args:
  mesh_dims: A list of (dim_name, dim_size) tuples.
  mesh_name: Name of the created mesh. Defaults to ''.
  local_devices: String representations of devices to use. This is the device part of tf.DeviceSpec, e.g. 'CPU:0'. Defaults to all available local logical devices.
  device_type: Type of device to build the mesh for. Defaults to 'CPU'. Supported values are 'CPU', 'GPU' and 'TPU'.
  use_xla_spmd: Boolean. When True, uses XLA SPMD instead of DTensor SPMD.

Returns:
  A mesh that spans evenly across all DTensor clients in the cluster.