tf.distribute.InputOptions
Run options for `experimental_distribute_dataset(s_from_function)`.
tf.distribute.InputOptions(
    experimental_prefetch_to_device=True,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_WORKER,
    experimental_place_dataset_on_device=False
)
This can be used to hold strategy-specific configuration.
# Set up TPUStrategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

dataset = tf.data.Dataset.range(16)
distributed_dataset_on_host = (
    strategy.experimental_distribute_dataset(
        dataset,
        tf.distribute.InputOptions(
            experimental_replication_mode=
                tf.distribute.InputReplicationMode.PER_WORKER,
            experimental_place_dataset_on_device=False)))
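Note that the replication mode and device placement passed above are the defaults, so an `InputOptions` built with no arguments behaves the same way. A quick check (assuming TensorFlow is importable):

```python
import tensorflow as tf

# The options passed in the snippet above match the defaults: with no
# arguments, InputOptions uses PER_WORKER replication and keeps the
# dataset on the host rather than the accelerator device.
defaults = tf.distribute.InputOptions()
print(defaults.experimental_replication_mode)
print(defaults.experimental_place_dataset_on_device)
```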
Attributes

`experimental_prefetch_to_device`
    Boolean. Defaults to True. If True, dataset elements are prefetched to accelerator device memory; if False, they are prefetched to host device memory. Must be False when using the TPUEmbedding API. `experimental_prefetch_to_device` can only be used with `experimental_replication_mode=PER_WORKER`.

`experimental_replication_mode`
    Replication mode for the input function. Currently, `InputReplicationMode.PER_REPLICA` is only supported with `tf.distribute.MirroredStrategy`'s `experimental_distribute_datasets_from_function`. The default value is `InputReplicationMode.PER_WORKER`.

`experimental_place_dataset_on_device`
    Boolean. Defaults to False. When True, the dataset is placed on the device; otherwise it remains on the host. `experimental_place_dataset_on_device=True` can only be used with `experimental_replication_mode=PER_REPLICA`.
Last updated 2021-02-18 UTC.