tf.tpu.experimental.shutdown_tpu_system
Shuts down the TPU devices.
tf.tpu.experimental.shutdown_tpu_system(
cluster_resolver=None
)
This clears all caches, including those maintained across sequential calls to tf.tpu.experimental.initialize_tpu_system, such as the compilation cache.
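A minimal usage sketch, assuming a Cloud TPU reachable with an empty address string (as on a TPU VM or Colab runtime); the resolver setup shown here is illustrative and not part of this API:

import tensorflow as tf

# Connect to the TPU cluster and initialize the TPU system.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")  # address is an assumption
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# ... run TPU workloads here ...

# Shut down the TPU devices; this clears all caches, including the
# compilation cache maintained across initialize_tpu_system calls.
tf.tpu.experimental.shutdown_tpu_system(resolver)

Call shutdown_tpu_system eagerly, outside any tf.function; otherwise it raises RuntimeError, as noted below.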
Args
cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster.
Raises
RuntimeError: If no TPU devices are found for eager execution, or if called inside a tf.function.