tf.config.set_soft_device_placement

Enable or disable soft device placement.

    tf.config.set_soft_device_placement(
        enabled
    )

If enabled, ops can be placed on a different device than the one explicitly
assigned by the user. This can incur a large performance cost due to increased
data communication between devices.
Some cases where soft_device_placement would modify device assignment (a short
example follows the list):

1. There is no GPU/TPU implementation for the op.
2. No GPU devices are known or registered.
3. The op needs to be co-located with reference-type (reftype) inputs that
   come from the CPU.
4. The op cannot be compiled by XLA. This is common on TPUs, which always
   require the XLA compiler.
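
For example, the following minimal sketch (not part of the original reference;
it assumes no GPU is visible to TensorFlow) illustrates case 2: an op
explicitly pinned to a GPU falls back to the CPU instead of raising an error:

    import tensorflow as tf

    # Allow TensorFlow to fall back to an available device when the device
    # requested via tf.device() cannot run the op (e.g. no GPU is present).
    tf.config.set_soft_device_placement(True)

    with tf.device('/GPU:0'):
        # With no GPU registered, soft placement lets this matmul run on the
        # CPU instead of failing with an InvalidArgumentError.
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.matmul(a, a)

    print(b.device)  # e.g. /job:localhost/replica:0/task:0/device:CPU:0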
For TPUs, if this option is true, a feature called automatic outside
compilation is enabled. Automatic outside compilation moves uncompilable ops
within a TPU program to run on the host instead. This can be used when
encountering compilation failures due to unsupported ops.

Note: by default, soft device placement is enabled when running in eager mode
(for convenience) and disabled in graph mode (for performance).
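
As a rough sketch of how this plays out on a TPU (an assumption-laden example,
not from the original reference: it presumes a reachable TPU runtime and uses
a string op purely as an illustration of something XLA cannot compile):

    import tensorflow as tf

    # Hypothetical setup; assumes a TPU runtime is reachable from this host.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # With soft placement enabled, ops inside the TPU program that XLA cannot
    # compile may be moved to the host by automatic outside compilation
    # instead of failing compilation.
    tf.config.set_soft_device_placement(True)

    @tf.function
    def step(x):
        # String ops generally have no TPU/XLA kernel; automatic outside
        # compilation can run this part on the host.
        tf.print(tf.strings.as_string(x))
        return x * 2.0

    strategy.run(step, args=(tf.constant(1.0),))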
Args:
    enabled: A boolean indicating whether to enable soft placement.
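
A minimal usage sketch pairing the setter with its companion getter,
tf.config.get_soft_device_placement:

    import tensorflow as tf

    # Toggle soft placement and read the current setting back.
    tf.config.set_soft_device_placement(True)
    assert tf.config.get_soft_device_placement()

    tf.config.set_soft_device_placement(False)
    assert not tf.config.get_soft_device_placement()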
[null,null,["Last updated 2023-10-06 UTC."],[],[],null,["# tf.config.set_soft_device_placement\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.14.0/tensorflow/python/framework/config.py#L283-L309) |\n\nEnable or disable soft device placement.\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.config.set_soft_device_placement`](https://www.tensorflow.org/api_docs/python/tf/config/set_soft_device_placement)\n\n\u003cbr /\u003e\n\n tf.config.set_soft_device_placement(\n enabled\n )\n\nIf enabled, ops can be placed on different devices than the device explicitly\nassigned by the user. This potentially has a large performance cost due to an\nincrease in data communication between devices.\n\nSome cases where soft_device_placement would modify device assignment are:\n\n1. no GPU/TPU implementation for the OP\n2. no GPU devices are known or registered\n3. need to co-locate with reftype input(s) which are from CPU\n4. an OP can not be compiled by XLA. Common for TPU which always requires the XLA compiler.\n\nFor TPUs, if this option is true, a feature called automatic outside\ncompilation is enabled. Automatic outside compilation will move uncompilable\nops within a TPU program to instead run on the host. This can be used when\nencountering compilation failures due to unsupported ops.\n| **Note:** by default soft device placement is enabled when running in eager mode (for convenience) and disabled in graph mode (for performance).\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-----------|--------------------------------------------------------|\n| `enabled` | A boolean indicating whether to enable soft placement. |\n\n\u003cbr /\u003e"]]