Creates a dataset which reads data from the tf.data service.

This is useful when the dataset is registered by one process, then used in another process. When the same process is both registering and reading from the dataset, it is simpler to use tf.data.experimental.service.distribute instead.
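For the single-process case mentioned above, a minimal sketch of the distribute path (using an in-process dispatcher and worker; "parallel_epochs" is chosen arbitrarily here):

```python
import tensorflow as tf

# Start an in-process tf.data service: one dispatcher and one worker.
dispatcher = tf.data.experimental.service.DispatchServer()
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

# distribute() registers the dataset and reads it back in one step,
# so no explicit dataset_id handling is needed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs", service=dispatcher.target))
print(sorted(dataset.as_numpy_iterator()))
```

This avoids passing a dataset_id between processes, which is the main reason to prefer distribute when registration and reading happen in the same process.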

Before using from_dataset_id, the dataset must have been registered with the tf.data service using tf.data.experimental.service.register_dataset. register_dataset returns a dataset id for the registered dataset, and that is the dataset_id which should be passed to from_dataset_id.

The element_spec argument indicates the tf.TypeSpecs for the elements produced by the dataset. Currently element_spec must be explicitly specified, and must match the dataset registered under dataset_id. element_spec defaults to None so that in the future we can support automatically discovering the element_spec by querying the service.

tf.data.experimental.service.distribute is a convenience method which combines register_dataset and from_dataset_id into a dataset transformation. See the documentation for tf.data.experimental.service.distribute for more detail about how from_dataset_id works.

dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset_id = tf.data.experimental.service.register_dataset(
    dispatcher.target, dataset)
dataset = tf.data.experimental.service.from_dataset_id(
    processing_mode="parallel_epochs",
    service=dispatcher.target,
    dataset_id=dataset_id,
    element_spec=dataset.element_spec)
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

processing_mode A string specifying the policy for how data should be processed by workers. Can be either "parallel_epochs" to have each worker process a copy of the dataset, or "distributed_epoch" to split a single iteration of the dataset across all the workers.
service A string or a tuple indicating how to connect to the service. If it's a string, it should be in the format [<protocol>://]<address>, where <address> identifies the dispatcher address and <protocol> can optionally be used to override the default protocol to use. If it's a tuple, it should be (protocol, address).
dataset_id The id of the dataset to read from. This id is returned by register_dataset when the dataset is registered with the service.
element_spec A nested structure of tf.TypeSpecs representing the type of elements produced by the dataset. Use tf.data.Dataset.element_spec to see the element spec for a given dataset.
job_name (Optional.) The name of the job. This argument makes it possible for multiple datasets to share the same job. The default behavior is that the dataset creates anonymous, exclusively owned jobs.
consumer_index (Optional.) The index of the consumer in the range from 0 to num_consumers. Must be specified alongside num_consumers. When specified, consumers will read from the job in a strict round-robin order, instead of the default first-come-first-served order.
num_consumers (Optional.) The number of consumers which will consume from the job. Must be specified alongside consumer_index. When specified, consumers will read from the job in a strict round-robin order, instead of the default first-come-first-served order. When num_consumers is specified, the dataset must have infinite cardinality to prevent a producer from running out of data early and causing consumers to go out of sync.
max_outstanding_requests (Optional.) A limit on how many elements may be requested at the same time. You can use this option to control the amount of memory used, since distribute won't use more than element_size * max_outstanding_requests of memory.
data_transfer_protocol (Optional.) The protocol to use for transferring data with the service. By default, data is transferred using gRPC.
target_workers (Optional.) Which workers to read from. If "AUTO", the runtime decides which workers to read from. If "ANY", reads from any tf.data service workers. If "LOCAL", only reads from local in-process tf.data service workers. "AUTO" works well for most cases, but users can specify other targets. For example, "LOCAL" helps avoid RPCs and data copies if every TF worker is colocated with a tf.data service worker. Defaults to "AUTO".

A tf.data.Dataset which reads from the tf.data service.
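To illustrate the job_name argument described above, a sketch of two readers sharing one job: when both pass the same job_name, each element of the registered dataset is delivered to exactly one of them, rather than each reader seeing a full copy. The job name "shared_job" is an arbitrary choice for this example.

```python
import tensorflow as tf

# In-process tf.data service for demonstration purposes.
dispatcher = tf.data.experimental.service.DispatchServer()
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

dataset = tf.data.Dataset.range(6)
dataset_id = tf.data.experimental.service.register_dataset(
    dispatcher.target, dataset)

# Two readers with the same job_name share one job; elements are split
# between them first-come-first-served instead of duplicated.
def make_reader():
    return tf.data.experimental.service.from_dataset_id(
        processing_mode="parallel_epochs",
        service=dispatcher.target,
        dataset_id=dataset_id,
        element_spec=dataset.element_spec,
        job_name="shared_job")

ds_a = make_reader()
ds_b = make_reader()

# Across both readers, each element appears exactly once.
seen = sorted(list(ds_a.as_numpy_iterator()) + list(ds_b.as_numpy_iterator()))
print(seen)
```

Without job_name, each call to from_dataset_id creates its own anonymous job, and each reader would independently receive all six elements.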