tfq.get_expectation_op
Get a TensorFlow op that will calculate batches of expectation values.
tfq.get_expectation_op(
    backend=None,
    *,
    quantum_concurrent=quantum_context.get_quantum_concurrent_op_mode()
)
This function produces a non-differentiable TF op that will calculate batches of expectation values given tensor batches of `cirq.Circuit`s, parameter values, and `cirq.PauliSum` operators to measure.
# Simulate circuits with C++.
my_op = tfq.get_expectation_op()

# Prepare some inputs.
qubit = cirq.GridQubit(0, 0)
my_symbol = sympy.Symbol('alpha')
my_circuit_tensor = tfq.convert_to_tensor([
    cirq.Circuit(cirq.H(qubit) ** my_symbol)
])
my_values = np.array([[0.123]])
my_paulis = tfq.convert_to_tensor([[
    3.5 * cirq.X(qubit) - 2.2 * cirq.Y(qubit)
]])

# This op can now be run with:
output = my_op(
    my_circuit_tensor, ['alpha'], my_values, my_paulis)
output
tf.Tensor([[0.71530885]], shape=(1, 1), dtype=float32)
To make the op differentiable, a differentiator object is needed; see `tfq.differentiators` for more details. Below is a simple example of how to make `my_op` from the above code block differentiable:
diff = tfq.differentiators.ForwardDifference()
my_differentiable_op = diff.generate_differentiable_op(
    analytic_op=my_op
)
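The `ForwardDifference` rule used above estimates gradients with a first-order finite difference of the expectation value. A minimal pure-Python sketch of that rule (using `math.sin` as a stand-in for an expectation function, so no TFQ installation is assumed):

```python
import math

def forward_difference(f, x, eps=1e-4):
    """First-order forward-difference gradient estimate: the same
    basic rule ForwardDifference applies to expectation values."""
    return (f(x + eps) - f(x)) / eps

# Stand-in "expectation" function: d/dx sin(x) at x=0 is cos(0) = 1,
# and the estimate should land close to that.
grad = forward_difference(math.sin, 0.0)
```

In TFQ the same idea is applied per symbol: the differentiator re-executes the expectation op at shifted parameter values and combines the results into a gradient estimate.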
| Args | |
|---|---|
| `backend` | Optional Python `object` that specifies what backend this op should use when evaluating circuits. Can be `cirq.DensityMatrixSimulator` or any `cirq.sim.simulator.SimulatesExpectationValues`. If not provided, the default C++ analytical expectation calculation op is returned. |
| `quantum_concurrent` | Optional Python `bool`. `True` indicates that the returned op should not block graph-level parallelism on itself when executing; `False` indicates that graph-level parallelism on itself should be blocked. Defaults to the value of `tfq.get_quantum_concurrent_op_mode`, which defaults to `True` (no blocking). This flag is only needed by advanced users running very large simulations or running on a real chip. |
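As a configuration sketch of the `backend` argument described above (only backends the table itself names are used; this fragment assumes `cirq` and `tensorflow_quantum` are installed and importable as shown):

```python
import cirq
import tensorflow_quantum as tfq

# Default: the fast C++ analytical expectation op.
analytic_op = tfq.get_expectation_op()

# Mixed-state simulation via a cirq backend listed in the table above.
density_matrix_op = tfq.get_expectation_op(
    backend=cirq.DensityMatrixSimulator()
)
```

Both calls return ops with the same four-argument signature, so they are interchangeable downstream.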
| Returns | |
|---|---|
| A `callable` with the signature `op(programs, symbol_names, symbol_values, pauli_sums)`: | |
| `programs` | `tf.Tensor` of strings with shape `[batch_size]` containing the string representations of the circuits to be executed. |
| `symbol_names` | `tf.Tensor` of strings with shape `[n_params]`, which specifies the order in which the values in `symbol_values` should be placed inside the circuits in `programs`. |
| `symbol_values` | `tf.Tensor` of real numbers with shape `[batch_size, n_params]` specifying parameter values to resolve into the circuits specified by `programs`, following the ordering dictated by `symbol_names`. |
| `pauli_sums` | `tf.Tensor` of strings with shape `[batch_size, n_ops]` containing the string representations of the operators that will be used on all of the circuits in the expectation calculations. |
| Returns | `tf.Tensor` with shape `[batch_size, n_ops]` holding the expectation value for each circuit with each operator applied to it (after resolving the corresponding parameters in). |
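The shape contract in the tables above can be summarized with plain Python tuples standing in for tensor shapes (the concrete sizes here are arbitrary illustrative values, not anything from the API):

```python
# Arbitrary illustrative sizes.
batch_size, n_params, n_ops = 2, 3, 4

# Input shapes, per op(programs, symbol_names, symbol_values, pauli_sums):
programs_shape      = (batch_size,)           # serialized circuits
symbol_names_shape  = (n_params,)             # symbol ordering
symbol_values_shape = (batch_size, n_params)  # values to resolve per circuit
pauli_sums_shape    = (batch_size, n_ops)     # operators to measure per circuit

# Output: one expectation value per (circuit, operator) pair.
output_shape = (batch_size, n_ops)
```

Note that `symbol_names` is shared across the whole batch, while `symbol_values` and `pauli_sums` carry a leading `batch_size` dimension matching `programs`.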
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-05-17 UTC.