# tf.test.is_gpu_available

[TensorFlow 1 version](/versions/r1.15/api_docs/python/tf/test/is_gpu_available) | [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/python/framework/test_util.py#L1385-L1447)

Returns whether TensorFlow can access a GPU.

#### View aliases

**Compat aliases for migration** (see the [Migration guide](https://www.tensorflow.org/guide/migrate) for more details): [`tf.compat.v1.test.is_gpu_available`](/api_docs/python/tf/test/is_gpu_available)

    tf.test.is_gpu_available(
        cuda_only=False, min_cuda_compute_capability=None
    )

**Warning:** if a non-GPU version of the package is installed, the function will also return False. Use [`tf.test.is_built_with_cuda`](../../tf/test/is_built_with_cuda) to validate whether TensorFlow was built with CUDA support.

#### Args

| Argument | Description |
|---|---|
| `cuda_only` | Limit the search to CUDA GPUs. |
| `min_cuda_compute_capability` | A `(major, minor)` pair that indicates the minimum CUDA compute capability required, or `None` if no requirement. |
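As a quick illustration (not part of the original page), here is a minimal sketch of calling the function with and without the keyword arguments from the signature above; the `(3, 5)` capability threshold is an assumed example value, not a recommendation:

    import tensorflow as tf

    # Any GPU device visible to TensorFlow.
    if tf.test.is_gpu_available():
        print("A GPU device is available.")

    # Restrict the check to CUDA GPUs with at least compute capability 3.5.
    if tf.test.is_gpu_available(cuda_only=True,
                                min_cuda_compute_capability=(3, 5)):
        print("A CUDA GPU with compute capability >= 3.5 is available.")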
Note that the keyword argument name "cuda_only" is misleading, since the routine will return True when a GPU device is available irrespective of whether TF was built with CUDA support or ROCm support. However, no change is made here because:

* Changing the name "cuda_only" to something more generic would break backward compatibility.
* Adding an equivalent "rocm_only" would require the implementation to check the build type. This in turn would require doing the same for CUDA, and thus potentially break backward compatibility.
* Adding a new "cuda_or_rocm_only" would not break backward compatibility, but would require most (if not all) callers to update the call to use "cuda_or_rocm_only" instead of "cuda_only".
#### Returns

True if a GPU device of the requested kind is available.
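A small sketch (not from the original page) of how the warning above suggests pairing this function with [`tf.test.is_built_with_cuda`](../../tf/test/is_built_with_cuda): checking the build first distinguishes a package compiled without CUDA support from a machine that simply has no usable GPU.

    import tensorflow as tf

    if not tf.test.is_built_with_cuda():
        # The installed package was not built with CUDA support, so a False
        # result from is_gpu_available() says nothing about the hardware.
        print("TensorFlow was not built with CUDA support.")
    elif not tf.test.is_gpu_available(cuda_only=True):
        print("CUDA build, but no CUDA GPU is currently visible.")
    else:
        print("A CUDA GPU is available.")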