# tf.test.experimental.sync_devices
Synchronizes all devices.
```python
tf.test.experimental.sync_devices() -> None
```
By default, GPUs run asynchronously. This means that when you run an op on the
GPU, like `tf.linalg.matmul`, the op may still be running on the GPU when the
function returns. Non-GPU devices can also be made to run asynchronously by
calling `tf.config.experimental.set_synchronous_execution(False)`. Calling
`sync_devices()` blocks until pending ops have finished executing. This is
primarily useful for measuring performance during a benchmark.

For example, here is how you can measure how long `tf.linalg.matmul` runs:
```python
import time
x = tf.random.normal((4096, 4096))
tf.linalg.matmul(x, x)  # Warmup.
tf.test.experimental.sync_devices()  # Block until warmup has completed.

start = time.time()
y = tf.linalg.matmul(x, x)
tf.test.experimental.sync_devices()  # Block until matmul has completed.
end = time.time()
print(f'Time taken: {end - start}')
```
If the calls to `sync_devices()` were omitted, the time printed could be too
small. This is because the op could still be running asynchronously when the
line `end = time.time()` is executed.
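This pitfall is not specific to TensorFlow: timing any asynchronously dispatched work without a synchronization point undercounts it. Here is a minimal pure-Python sketch of the same effect, using a thread pool to stand in for the GPU's asynchronous execution; `slow_op` and the executor are illustrative only, not part of the TensorFlow API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_op():
    # Stand-in for a GPU kernel: sleeps instead of computing.
    time.sleep(0.2)

executor = ThreadPoolExecutor(max_workers=1)

# Wrong: the "op" is dispatched asynchronously, so the timer stops
# long before the work has actually finished.
start = time.perf_counter()
future = executor.submit(slow_op)
wrong_elapsed = time.perf_counter() - start
future.result()  # Drain the pending work before the next measurement.

# Right: block on the pending work (the analogue of sync_devices())
# before stopping the timer.
start = time.perf_counter()
future = executor.submit(slow_op)
future.result()  # Synchronization point.
right_elapsed = time.perf_counter() - start

print(f'Without sync: {wrong_elapsed:.3f}s')  # Far too small.
print(f'With sync:    {right_elapsed:.3f}s')  # Close to the real 0.2s cost.
```

The first measurement only captures the cost of *launching* the work, which is why an unsynchronized TensorFlow benchmark can report implausibly small times.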
| Raises | |
|----------------|---|
| `RuntimeError` | If run outside Eager mode. This must be called in Eager mode, outside any `tf.function`s. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-04-26 UTC.