tfq.set_quantum_concurrent_op_mode
Set the global op latency mode in the execution context.
tfq.set_quantum_concurrent_op_mode(
    mode
)
This is an advanced TFQ feature that should be used only in very specific
cases: namely, when the memory requirements of a simulation are extremely
large, or when executing against a real quantum chip.
If you are going to make use of this function, please call it at the top
of your module, right after import:
import tensorflow_quantum as tfq
tfq.set_quantum_concurrent_op_mode(False)
Args

mode
    Python bool indicating whether or not circuit executing ops
    should block graph level parallelism. Advanced users should
    set mode=False when executing very large simulation workloads
    or when executing against a real quantum chip.
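For example, a minimal sketch of disabling concurrent op mode before running a large simulation workload. The circuit, symbols, and tfq.layers.Expectation usage below are illustrative assumptions, not part of this API:

import cirq
import sympy
import tensorflow_quantum as tfq

# Disable concurrent op mode before any circuit-executing ops are created,
# so those ops will block graph-level parallelism.
tfq.set_quantum_concurrent_op_mode(False)

# Illustrative workload: expectation value of a small parameterized circuit.
qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

expectation = tfq.layers.Expectation()(
    circuit,
    symbol_names=[theta],
    symbol_values=[[0.5]],
    operators=[cirq.Z(qubit)])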