tf.config.get_soft_device_placement
Return status of soft device placement flag.
tf.config.get_soft_device_placement()
If enabled, ops can be placed on devices other than the one explicitly
assigned by the user. This can carry a large performance cost due to
increased data communication between devices.
Some cases where soft_device_placement would modify device assignment are:
- the op has no GPU/TPU implementation
- no GPU devices are known or registered
- the op must be co-located with reference-type input(s) that live on the CPU
- the op cannot be compiled by XLA (common on TPU, which always requires
  the XLA compiler)
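The second case above can be seen directly: the sketch below explicitly requests `/GPU:0`, and with soft placement enabled the op falls back to an available device (e.g. the CPU) on a machine with no registered GPUs instead of raising an error.

```python
import tensorflow as tf

# Allow TensorFlow to override an impossible explicit placement.
tf.config.set_soft_device_placement(True)

# Request a GPU explicitly; if no GPU device is registered, soft
# placement silently runs the op on an available device instead.
with tf.device('/GPU:0'):
    x = tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])

print(x.numpy())  # -> [4. 6.]
```

With soft placement disabled, the same code raises an error on a GPU-less machine, since the explicit device request cannot be satisfied.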
For TPUs, if this option is true, a feature called automatic outside
compilation is enabled. Automatic outside compilation will move uncompilable
ops within a TPU program to instead run on the host. This can be used when
encountering compilation failures due to unsupported ops.
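A minimal sketch of reading and toggling the flag with its companion setter, `tf.config.set_soft_device_placement`:

```python
import tensorflow as tf

# Read the current soft device placement setting (a boolean).
print(tf.config.get_soft_device_placement())

# Enable soft placement, then confirm the getter reflects the change.
tf.config.set_soft_device_placement(True)
assert tf.config.get_soft_device_placement()
```

The flag can be flipped at any time; it affects how subsequently executed ops are assigned to devices.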
Returns:
    A boolean indicating whether soft placement is enabled.
Last updated 2023-10-06 UTC.