A Layer that calculates an expectation value.
```python
tfq.layers.Expectation(
    backend='noiseless', differentiator=None, **kwargs
)
```
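For example, here is a minimal construction sketch; ParameterShift is one of the differentiators shipped in tfq.differentiators:

```python
# Default construction: noiseless analytic expectation values.
expectation_layer = tfq.layers.Expectation()

# Sketch: explicitly choose how gradients are computed.
shift_layer = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ParameterShift())
```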
| Input shape | |
|---|---|
| tf.Tensor of shape [batch_size] containing serialized circuits (or a Python list of cirq.Circuit objects); see call below. |

| Output shape | |
|---|---|
| tf.Tensor of shape [batch_size, n_ops], expectation values for each circuit and operator. |
Given an input circuit and a set of parameter values, this layer prepares a quantum state and adds ops to the TensorFlow graph that output expectation values of a set of observables taken on that state.
First define a simple helper function for generating a parametrized quantum circuit that we will use throughout:
```python
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq


def _gen_single_bit_rotation_problem(bit, symbols):
    """Generate a toy problem on 1 qubit."""
    starting_state = [0.123, 0.456, 0.789]
    circuit = cirq.Circuit(
        cirq.rx(starting_state[0])(bit),
        cirq.ry(starting_state[1])(bit),
        cirq.rz(starting_state[2])(bit),
        cirq.rz(symbols[2])(bit),
        cirq.ry(symbols[1])(bit),
        cirq.rx(symbols[0])(bit)
    )
    return circuit
```
In quantum machine learning there are two very common use cases that align with Keras layer constructs. The first is where the circuits themselves represent the input data points (see the note at the bottom about using compiled models):
```python
bit = cirq.GridQubit(0, 0)
symbols = sympy.symbols('x, y, z')
ops = [-1.0 * cirq.Z(bit), cirq.X(bit) + 2.0 * cirq.Z(bit)]
circuit_list = [
    _gen_single_bit_rotation_problem(bit, symbols),
    cirq.Circuit(
        cirq.Z(bit) ** symbols[0],
        cirq.X(bit) ** symbols[1],
        cirq.Z(bit) ** symbols[2]
    ),
    cirq.Circuit(
        cirq.X(bit) ** symbols[0],
        cirq.Z(bit) ** symbols[1],
        cirq.X(bit) ** symbols[2]
    )
]
expectation_layer = tfq.layers.Expectation()
output = expectation_layer(
    circuit_list, symbol_names=symbols, operators=ops)
# Here output[i][j] corresponds to the expectation of all the ops
# in ops w.r.t circuits[i], where Keras-managed variables are
# placed in the symbols 'x', 'y', 'z'.
tf.shape(output)
# tf.Tensor([3 2], shape=(2,), dtype=int32)
```
Here, different cirq.Circuit instances sharing the common symbols 'x', 'y' and 'z' are used as input. The layer uses the symbol_names argument to map Keras-managed variables onto these circuits constructed with sympy.Symbols. Note that although you used a Python list containing your circuits here, you could also pass a tf.keras.Input layer or any tensor-like object to specify the circuits you would like fed to the layer at runtime.
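If you want to precompute that tensor-like input, one option (a small sketch using tfq.convert_to_tensor, which serializes cirq.Circuit objects into a string tensor) is:

```python
# Serialize the circuits once, then feed the resulting string tensor
# to the layer instead of a raw Python list.
circuit_tensor = tfq.convert_to_tensor(circuit_list)
output = expectation_layer(
    circuit_tensor, symbol_names=symbols, operators=ops)
```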
Another common use case is where there is a fixed circuit and the expectation operators vary:
```python
bit = cirq.GridQubit(0, 0)
symbols = sympy.symbols('x, y, z')
ops = [-1.0 * cirq.Z(bit), cirq.X(bit) + 2.0 * cirq.Z(bit)]
fixed_circuit = _gen_single_bit_rotation_problem(bit, symbols)
expectation_layer = tfq.layers.Expectation()
output = expectation_layer(
    fixed_circuit,
    symbol_names=symbols,
    operators=ops,
    initializer=tf.keras.initializers.RandomUniform(0, 2 * np.pi))
# Here output[i][j] corresponds to the expectation of operators[j]
# w.r.t circuit i, where variable values are managed by Keras and
# stored in the symbols 'x', 'y', 'z'.
tf.shape(output)
# tf.Tensor([1 2], shape=(2,), dtype=int32)
```
Note that in the above examples you used a cirq.Circuit object and a list of cirq.PauliSum objects as inputs to your layer. To allow for varying inputs you could change the call in the above code to:

```python
expectation_layer(circuit_inputs, symbol_names=symbols, operators=ops)
```

where circuit_inputs is tf.keras.Input(shape=(), dtype=tf.dtypes.string), allowing you to pass different circuits into a compiled model. Lastly, you also supplied a tf.keras.initializers object to the initializer argument. This argument is optional; it is only used when the layer itself manages the symbols of the circuit rather than having their values fed in from somewhere else in the model.
There are also some more complex use cases. Notably, these all make use of the symbol_values parameter, which causes the Expectation layer to stop managing the sympy.Symbols in the quantum circuits for the user and instead requires the user to supply the input values themselves. Let's look at the case where there is a single fixed circuit, some fixed operators, and symbols that must be common to all circuits:
```python
bit = cirq.GridQubit(0, 0)
symbols = sympy.symbols('x, y, z')
ops = [cirq.Z(bit), cirq.X(bit)]
circuit = _gen_single_bit_rotation_problem(bit, symbols)
values = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
expectation_layer = tfq.layers.Expectation()
output = expectation_layer(
    circuit,
    symbol_names=symbols,
    symbol_values=values,
    operators=ops)
# output[i][j] = the expectation of operators[j] with values[i]
# placed into the symbols of the circuit, in the order specified
# by symbol_names.
# So output[1][1] = the expectation of your circuit with parameter
# values [2, 2, 2] w.r.t Pauli X.
output
# tf.Tensor(
# [[0.63005245 0.76338404]
#  [0.25707167 0.9632684 ]
#  [0.79086655 0.5441111 ]], shape=(3, 2), dtype=float32)
```
Here is a simple model that uses this input signature of tfq.layers.Expectation and learns to undo the random rotation of the qubit:
```python
bit = cirq.GridQubit(0, 0)
symbols = sympy.symbols('x, y, z')
circuit = _gen_single_bit_rotation_problem(bit, symbols)
control_input = tf.keras.Input(shape=(1,))
circuit_inputs = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
d1 = tf.keras.layers.Dense(10)(control_input)
d2 = tf.keras.layers.Dense(3)(d1)
expectation = tfq.layers.Expectation()(
    circuit_inputs,  # See note below!
    symbol_names=symbols,
    symbol_values=d2,
    operators=cirq.Z(bit))
data_in = np.array([[1], [0]], dtype=np.float32)
data_out = np.array([[1], [-1]], dtype=np.float32)
model = tf.keras.Model(
    inputs=[circuit_inputs, control_input], outputs=expectation)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.mean_squared_error)
history = model.fit(
    x=[tfq.convert_to_tensor([circuit] * 2), data_in],
    y=data_out,
    epochs=100)
```
Lastly, symbol_values, operators, and circuit inputs can all be fed as Python list objects. They can also be fed as tf.Tensor inputs, meaning that you can supply all of these from other tensor objects (like tf.keras.layers.Dense outputs, tf.keras.Input, etc.).
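For instance, here is a small sketch of the fully tensor-fed version of an earlier example (bit, circuit, symbols, and ops reuse the definitions above):

```python
# Circuits and parameter values supplied as tensors rather than lists.
circuit_tensor = tfq.convert_to_tensor([circuit] * 3)
values_tensor = tf.constant(
    [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])
output = tfq.layers.Expectation()(
    circuit_tensor,
    symbol_names=symbols,
    symbol_values=values_tensor,
    operators=ops)
```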
Methods
add_loss
```python
add_loss(
    loss
)
```
Can be called inside of the call() method to add a scalar loss.
Example:
```python
class MyLayer(Layer):
    ...

    def call(self, x):
        self.add_loss(ops.sum(x))
        return x
```
add_metric
```python
add_metric(
    *args, **kwargs
)
```
add_variable
```python
add_variable(
    shape,
    initializer,
    dtype=None,
    trainable=True,
    autocast=True,
    regularizer=None,
    constraint=None,
    name=None
)
```
Add a weight variable to the layer.
Alias of add_weight().
add_weight
```python
add_weight(
    shape=None,
    initializer=None,
    dtype=None,
    trainable=True,
    autocast=True,
    regularizer=None,
    constraint=None,
    aggregation='none',
    overwrite_with_gradient=False,
    name=None
)
```
Add a weight variable to the layer.
| Args | |
|---|---|
| shape | Shape tuple for the variable. Must be fully-defined (no None entries). Defaults to () (scalar) if unspecified. |
| initializer | Initializer object to use to populate the initial variable value, or string name of a built-in initializer (e.g. "random_normal"). If unspecified, defaults to "glorot_uniform" for floating-point variables and to "zeros" for all other types (e.g. int, bool). |
| dtype | Dtype of the variable to create, e.g. "float32". If unspecified, defaults to the layer's variable dtype (which itself defaults to "float32" if unspecified). |
| trainable | Boolean, whether the variable should be trainable via backprop or whether its updates are managed manually. Defaults to True. |
| autocast | Boolean, whether to autocast the layer's variables when accessing them. Defaults to True. |
| regularizer | Regularizer object to call to apply penalty on the weight. These penalties are summed into the loss function during optimization. Defaults to None. |
| constraint | Constraint object to call on the variable after any optimizer update, or string name of a built-in constraint. Defaults to None. |
| aggregation | Optional string, one of None, "none", "mean", "sum" or "only_first_replica". Annotates the variable with the type of multi-replica aggregation to be used for this variable when writing custom data-parallel training loops. Defaults to "none". |
| overwrite_with_gradient | Boolean, whether to overwrite the variable with the computed gradient. This is useful for float8 training. Defaults to False. |
| name | String name of the variable. Useful for debugging purposes. |
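As a usage illustration (a generic Keras sketch, not specific to Expectation):

```python
class MyScale(tf.keras.layers.Layer):
    """Toy layer: multiplies its input by one trainable scalar."""

    def build(self, input_shape):
        # Scalar weight, zero-initialized and trained via backprop.
        self.scale = self.add_weight(
            shape=(),
            initializer="zeros",
            trainable=True,
            name="scale")

    def call(self, x):
        return x * self.scale
```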
build
```python
build(
    input_shape
)
```
build_from_config
```python
build_from_config(
    config
)
```
Builds the layer's states with the supplied config dict.
By default, this method calls the build(config["input_shape"]) method,
which creates weights based on the layer's input shape in the supplied
config. If your config contains other information needed to load the
layer's state, you should override this method.
| Args | |
|---|---|
| config | Dict containing the input shape associated with this layer. |
call
```python
call(
    inputs,
    *,
    symbol_names=None,
    symbol_values=None,
    operators=None,
    repetitions=None,
    initializer=tf.keras.initializers.RandomUniform(0, 2 * np.pi)
)
```
Keras call function.
| Args | |
|---|---|
| inputs | tf.Tensor or list. Circuits to execute, shape [batch_size]. |
| symbol_names | list or tf.Tensor, optional. Names of circuit parameters, shape [n_symbols]. |
| symbol_values | tf.Tensor, optional. Values for circuit parameters, shape [batch_size, n_symbols]. |
| operators | list or tf.Tensor, optional. Observables to measure, shape [n_ops]. |
| repetitions | int or tf.Tensor, optional. Number of measurement repetitions (for the noisy backend). |
| initializer | tf.keras.initializers.Initializer, optional. Initializer for circuit parameters. |
| Returns | |
|---|---|
| tf.Tensor | Tensor of shape [batch_size, n_ops] that holds the expectation value for each circuit with each op applied to it (after resolving the corresponding parameters in). |
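As a hedged sketch of the repetitions argument: assuming your TFQ build supports the trajectory-based noisy backend (backend='noisy'), a call might look like the following; with the default noiseless backend, repetitions is not needed. Names reuse the earlier examples.

```python
# Sketch only: assumes backend='noisy' is available in your TFQ version.
noisy_layer = tfq.layers.Expectation(backend='noisy')
noisy_output = noisy_layer(
    circuit,
    symbol_names=symbols,
    symbol_values=values,
    operators=ops,
    repetitions=1000)  # number of repeated runs to average over (assumption)
```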
compute_mask
```python
compute_mask(
    inputs, previous_mask
)
```
compute_output_shape
```python
compute_output_shape(
    *args, **kwargs
)
```
compute_output_spec
```python
compute_output_spec(
    *args, **kwargs
)
```
count_params
```python
count_params()
```
Count the total number of scalars composing the weights.
| Returns | |
|---|---|
| An integer count. |
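A quick generic illustration (plain Keras Dense layer, hypothetical sizes):

```python
dense = tf.keras.layers.Dense(3)
dense.build((None, 4))       # kernel: 4 * 3 = 12, bias: 3
print(dense.count_params())  # 15
```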
from_config
```python
@classmethod
from_config(
    config
)
```
Creates an operation from its config.
This method is the reverse of get_config, capable of instantiating the
same operation from the config dictionary.
if "dtype" in config and isinstance(config["dtype"], dict):
policy = dtype_policies.deserialize(config["dtype"])
| Args | |
|---|---|
| config | A Python dictionary, typically the output of get_config. |

| Returns | |
|---|---|
| An operation instance. |
get_build_config
```python
get_build_config()
```
Returns a dictionary with the layer's input shape.
This method returns a config dict that can be used by
build_from_config(config) to create all states (e.g. Variables and
Lookup tables) needed by the layer.
By default, the config only contains the input shape that the layer was built with. If you're writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.
| Returns | |
|---|---|
| A dict containing the input shape associated with the layer. |
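A round-trip sketch with build_from_config (generic Keras layer for illustration):

```python
dense = tf.keras.layers.Dense(3)
dense.build((None, 4))
config = dense.get_build_config()  # e.g. {'input_shape': (None, 4)}

fresh = tf.keras.layers.Dense(3)
fresh.build_from_config(config)    # recreates the weights via build()
```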
get_config
```python
get_config()
```
Returns the config of the object.
An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.
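For example, get_config pairs with from_config to re-instantiate an equivalent, freshly initialized object; a generic Keras sketch:

```python
dense = tf.keras.layers.Dense(3, activation="relu")
config = dense.get_config()
# Same hyperparameters, new (uninitialized) weights.
clone = tf.keras.layers.Dense.from_config(config)
```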
get_weights
```python
get_weights()
```
Return the values of layer.weights as a list of NumPy arrays.
load_own_variables
```python
load_own_variables(
    store
)
```
Loads the state of the layer.
You can override this method to take full control of how the state of
the layer is loaded upon calling keras.models.load_model().
| Args | |
|---|---|
| store | Dict from which the state of the model will be loaded. |
quantize
```python
quantize(
    mode, type_check=True
)
```
quantized_build
```python
quantized_build(
    input_shape, mode
)
```
quantized_call
```python
quantized_call(
    *args, **kwargs
)
```
rematerialized_call
```python
rematerialized_call(
    layer_call, *args, **kwargs
)
```
Enable rematerialization dynamically for layer's call method.
| Args | |
|---|---|
| layer_call | The original call method of a layer. |

| Returns | |
|---|---|
| The rematerialized layer's call method. |
save_own_variables
```python
save_own_variables(
    store
)
```
Saves the state of the layer.
You can override this method to take full control of how the state of
the layer is saved upon calling model.save().
| Args | |
|---|---|
| store | Dict where the state of the model will be saved. |
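A sketch of overriding this pair of hooks (hypothetical custom layer; by default each variable is stored under a generated key):

```python
class MyDense(tf.keras.layers.Dense):
    """Sketch: store weights under explicit keys in `store`."""

    def save_own_variables(self, store):
        store["kernel"] = self.kernel.numpy()
        store["bias"] = self.bias.numpy()

    def load_own_variables(self, store):
        self.kernel.assign(store["kernel"])
        self.bias.assign(store["bias"])
```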
set_weights
```python
set_weights(
    weights
)
```
Sets the values of layer.weights from a list of NumPy arrays.
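For example, get_weights/set_weights can copy state between identically shaped layers (generic Keras sketch):

```python
src = tf.keras.layers.Dense(3)
dst = tf.keras.layers.Dense(3)
src.build((None, 4))
dst.build((None, 4))
dst.set_weights(src.get_weights())  # shapes must match layer.weights
```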
stateless_call
```python
stateless_call(
    trainable_variables,
    non_trainable_variables,
    *args,
    return_losses=False,
    **kwargs
)
```
Call the layer without any side effects.
| Args | |
|---|---|
| trainable_variables | List of trainable variables of the model. |
| non_trainable_variables | List of non-trainable variables of the model. |
| *args | Positional arguments to be passed to call(). |
| return_losses | If True, stateless_call() will return the list of losses created during call() as part of its return values. |
| **kwargs | Keyword arguments to be passed to call(). |
| Returns | |
|---|---|
| A tuple. By default, returns (outputs, non_trainable_variables). If return_losses = True, then returns (outputs, non_trainable_variables, losses). |
Example:
```python
model = ...
data = ...
trainable_variables = model.trainable_variables
non_trainable_variables = model.non_trainable_variables
# Call the model with zero side effects
outputs, non_trainable_variables = model.stateless_call(
    trainable_variables,
    non_trainable_variables,
    data,
)
# Attach the updated state to the model
# (until you do this, the model is still in its pre-call state).
for ref_var, value in zip(
    model.non_trainable_variables, non_trainable_variables
):
    ref_var.assign(value)
```
symbolic_call
```python
symbolic_call(
    *args, **kwargs
)
```
__call__
```python
__call__(
    *args, **kwargs
)
```
Call self as a function.