tf.data.experimental.DistributeOptions
Represents options for distributed data processing.
tf.data.experimental.DistributeOptions()
You can set the distribution options of a dataset through the experimental_distribute property of tf.data.Options; the property is an instance of tf.data.experimental.DistributeOptions.
options = tf.data.Options()
options.experimental_distribute.auto_shard = False
dataset = dataset.with_options(options)
Attributes

auto_shard
    Whether the dataset should be automatically sharded when processed in a distributed fashion. This is applicable when using Keras with a multi-worker/TPU distribution strategy and when using strategy.experimental_distribute_dataset(). In other cases, this option does nothing. If None, defaults to True.

num_devices
    The number of devices attached to this input pipeline. This will be set automatically by MultiDeviceIterator.
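As a rough conceptual sketch of what auto-sharding does: each of the N workers in a distributed job is given a disjoint slice of the input, equivalent in spirit to calling dataset.shard(num_workers, worker_index) on each worker. The snippet below is plain Python, not the TensorFlow implementation; the shard helper and file names are illustrative only.

```python
def shard(items, num_workers, worker_index):
    """Return every num_workers-th item starting at worker_index,
    mimicking the disjoint split that auto-sharding produces."""
    if not 0 <= worker_index < num_workers:
        raise ValueError("worker_index must be in [0, num_workers)")
    return [item for i, item in enumerate(items)
            if i % num_workers == worker_index]

# Six input files split across two workers with no overlap:
files = [f"part-{i:05d}" for i in range(6)]
print(shard(files, num_workers=2, worker_index=0))
# ['part-00000', 'part-00002', 'part-00004']
print(shard(files, num_workers=2, worker_index=1))
# ['part-00001', 'part-00003', 'part-00005']
```

Setting auto_shard = False (as in the example above) disables this automatic split, which is useful when the pipeline already shards its input explicitly.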
Methods
__eq__
View source
__eq__(other)
Return self==value.
__ne__
View source
__ne__(other)
Return self!=value.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.