Computes a device_assignment of a computation across a TPU topology.
```python
tf.contrib.tpu.device_assignment(
    topology, computation_shape=None, computation_stride=None, num_replicas=1
)
```
Attempts to choose a compact grid of cores for locality.
Returns a DeviceAssignment that describes the cores in the topology assigned
to each core of each replica.
computation_shape and computation_stride values should be powers of 2 for
optimal packing.
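The packing described above can be sketched in plain Python. This is an illustrative model, not the TensorFlow implementation: it assumes each replica occupies a block of `computation_shape * computation_stride` cores along each topology axis, and counts how many such blocks tile the topology. The function name and signature are hypothetical.

```python
def max_replicas(topology_shape, computation_shape=None, computation_stride=None):
    """Illustrative upper bound on how many replica blocks tile a topology.

    topology_shape: per-axis core counts of the TPU topology (e.g. [2, 2, 2]).
    computation_shape / computation_stride: per-axis block shape and spacing;
    both default to all-ones, matching the documented defaults.
    """
    rank = len(topology_shape)
    computation_shape = computation_shape or [1] * rank
    computation_stride = computation_stride or [1] * rank
    count = 1
    for dim, shape, stride in zip(topology_shape, computation_shape,
                                  computation_stride):
        block = shape * stride  # footprint of one replica along this axis
        count *= dim // block   # stride-spaced slots available on this axis
    return count

# A 2x2x2 topology holds eight single-core replicas; asking for a
# computation_shape of [1, 1, 2] halves the capacity to four.
print(max_replicas([2, 2, 2]))                                # 8
print(max_replicas([2, 2, 2], computation_shape=[1, 1, 2]))   # 4
```

If `num_replicas` exceeds this count, the real `device_assignment` raises `ValueError`, per the Raises table below.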
| Args | |
|---|---|
| topology | A `Topology` object that describes the TPU cluster topology. To obtain a TPU topology, evaluate the `Tensor` returned by `initialize_system` using `Session.run`. Either a serialized `TopologyProto` or a `Topology` object may be passed. Note: you must evaluate the `Tensor` first; you cannot pass an unevaluated `Tensor` here. |
| computation_shape | A rank-1 int32 numpy array with size equal to the topology rank, describing the shape of the computation's block of cores. If None, the `computation_shape` is `[1] * topology_rank`. |
| computation_stride | A rank-1 int32 numpy array of size topology_rank, describing the inter-core spacing of the `computation_shape` cores in the TPU topology. If None, the `computation_stride` is `[1] * topology_rank`. |
| num_replicas | The number of computation replicas to run. The replicas will be packed into the free spaces of the topology. | 
| Returns | |
|---|---|
| A DeviceAssignment object, which describes the mapping between the logical cores in each computation replica and the physical cores in the TPU topology. | 
| Raises | |
|---|---|
| ValueError | If `topology` is not a valid `Topology` object. |
| ValueError | If `computation_shape` or `computation_stride` are not rank-1 int32 numpy arrays with shape [3] where all values are positive. |
| ValueError | If the computation's replicas cannot fit into the TPU topology. |