tfr.keras.strategy_utils.get_strategy
[View source on GitHub](https://github.com/tensorflow/ranking/blob/v0.5.3/tensorflow_ranking/python/keras/strategy_utils.py#L45-L116)

Creates and initializes the requested `tf.distribute` strategy.
    tfr.keras.strategy_utils.get_strategy(
        strategy: str,
        cluster_resolver: Optional[tf.distribute.cluster_resolver.ClusterResolver] = None,
        variable_partitioner: Optional[tf.distribute.experimental.partitioners.Partitioner] = _USE_DEFAULT_VARIABLE_PARTITIONER,
        tpu: Optional[str] = ''
    ) -> Union[None, tf.distribute.MirroredStrategy,
               tf.distribute.MultiWorkerMirroredStrategy,
               tf.distribute.experimental.ParameterServerStrategy,
               tf.distribute.experimental.TPUStrategy]
Example usage:
strategy = get_strategy("MirroredStrategy")

| Args | |
| :--- | :--- |
| `strategy` | Key for a [`tf.distribute`](https://www.tensorflow.org/api_docs/python/tf/distribute) strategy to be used to train the model. Choose from ["MirroredStrategy", "MultiWorkerMirroredStrategy", "ParameterServerStrategy", "TPUStrategy"]. If None, no distributed strategy will be used. |
| `cluster_resolver` | A cluster_resolver used to build the strategy. |
| `variable_partitioner` | Variable partitioner to be used in ParameterServerStrategy. If the argument is not specified, a recommended [`tf.distribute.experimental.partitioners.MinSizePartitioner`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/partitioners/MinSizePartitioner) is used. If the argument is explicitly set to `None`, no partitioner is used and variables are not partitioned. This argument is used only when the strategy is [`tf.distribute.experimental.ParameterServerStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/ParameterServerStrategy); see that class's documentation for more information. |
| `tpu` | TPU address for TPUStrategy. Not used for other strategies. |
| Returns |
| :--- |
| A strategy to be used for distributed training. |
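The returned strategy is typically used as a context under which the model and optimizer are built, via `strategy.scope()`. The snippet below is a minimal sketch of that pattern using a hypothetical stand-in class (`_StrategySketch` is not part of TensorFlow); with a real `tf.distribute` strategy you would create your Keras model inside the scope.

```python
# Hypothetical stand-in mimicking the scope() pattern of a
# tf.distribute strategy; illustration only, not the real API.
from contextlib import contextmanager


class _StrategySketch:
    """Tracks whether model construction happened under its scope."""

    def __init__(self):
        self.entered = False

    @contextmanager
    def scope(self):
        # With a real strategy, variables created inside this
        # context are placed/replicated per the strategy.
        self.entered = True
        yield


strategy = _StrategySketch()
with strategy.scope():
    # In real code: model = tf.keras.Sequential([...]); model.compile(...)
    model = "built-under-strategy-scope"
```

With the real library, the same shape applies: build and compile the model inside `strategy.scope()`, then call `model.fit` as usual.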
| Raises |
| :--- |
| `ValueError` if `strategy` is not supported. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-08-18 UTC.