Noise


Noise is present in modern-day quantum computers. Qubits are susceptible to interference from the surrounding environment, imperfect fabrication, TLS (two-level system) defects and sometimes even gamma rays. Until large-scale error correction is reached, the algorithms of today must be able to remain functional in the presence of noise. This makes testing algorithms under noise an important step for validating that quantum algorithms / models will function on the quantum computers of today.

In this tutorial you will explore the basics of noisy circuit simulation in TFQ via the high-level tfq.layers API.

Setup

pip install tensorflow==2.7.0 tensorflow-quantum==0.7.2
pip install -q git+https://github.com/tensorflow/docs
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
import random
import cirq
import sympy
import tensorflow_quantum as tfq
import tensorflow as tf
import numpy as np
# Plotting
import matplotlib.pyplot as plt
import tensorflow_docs as tfdocs
import tensorflow_docs.plots

1. Understanding quantum noise

1.1 Basic circuit noise

Noise on a quantum computer impacts the bitstring samples you are able to measure from it. One intuitive way you can start to think about this is that a noisy quantum computer will "insert", "delete" or "replace" gates in random places like the diagram below:

Building off of this intuition, when dealing with noise, you are no longer using a single pure state \(|\psi \rangle\) but instead dealing with an ensemble of all possible noisy realizations of your desired circuit: \(\rho = \sum_j p_j |\psi_j \rangle \langle \psi_j |\), where \(p_j\) gives the probability that the system is in \(|\psi_j \rangle\).

Revisiting the picture above: if we knew beforehand that our system executed perfectly 90% of the time, and errored 10% of the time with just this one mode of failure, then our ensemble would be:

\(\rho = 0.9 |\psi_\text{desired} \rangle \langle \psi_\text{desired}| + 0.1 |\psi_\text{noisy} \rangle \langle \psi_\text{noisy}| \)

If there were more than one way that our circuit could error, then the ensemble \(\rho\) would contain more than just two terms (one for each new noisy realization that could happen). \(\rho\) is referred to as the density matrix describing your noisy system.
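
To make this concrete, here is a small numpy sketch (an illustration, not part of the original tutorial) that builds the two-term ensemble above, assuming the desired state is \(|11\rangle\) and picking \(|01\rangle\) as a hypothetical single failure mode:

# Build rho = 0.9 |psi_desired><psi_desired| + 0.1 |psi_noisy><psi_noisy|.
# The |01> failure state is a hypothetical choice for illustration.
import numpy as np

psi_desired = np.array([0, 0, 0, 1], dtype=complex)  # |11>
psi_noisy = np.array([0, 1, 0, 0], dtype=complex)    # |01>

rho = (0.9 * np.outer(psi_desired, psi_desired.conj())
       + 0.1 * np.outer(psi_noisy, psi_noisy.conj()))

print(np.round(np.real(np.diag(rho)), 2))  # diagonal = sampling probabilities
print(np.real(np.trace(rho)))              # a valid density matrix has trace 1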

1.2 Using channels to model circuit noise

Unfortunately in practice it's nearly impossible to know all the ways your circuit might error and their exact probabilities. A simplifying assumption you can make is that after each operation in your circuit there is some kind of channel that roughly captures how that operation might error. You can quickly create a circuit with some noise:

def x_circuit(qubits):
  """Produces an X wall circuit on `qubits`."""
  return cirq.Circuit(cirq.X.on_each(*qubits))

def make_noisy(circuit, p):
  """Add a depolarization channel to all qubits in `circuit` before measurement."""
  return circuit + cirq.Circuit(cirq.depolarize(p).on_each(*circuit.all_qubits()))

my_qubits = cirq.GridQubit.rect(1, 2)
my_circuit = x_circuit(my_qubits)
my_noisy_circuit = make_noisy(my_circuit, 0.5)
my_circuit
my_noisy_circuit

You can examine the noiseless density matrix \(\rho\) with:

rho = cirq.final_density_matrix(my_circuit)
np.round(rho, 3)
array([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]], dtype=complex64)

And the noisy density matrix \(\rho\) with:

rho = cirq.final_density_matrix(my_noisy_circuit)
np.round(rho, 3)
array([[0.111+0.j, 0.   +0.j, 0.   +0.j, 0.   +0.j],
       [0.   +0.j, 0.222+0.j, 0.   +0.j, 0.   +0.j],
       [0.   +0.j, 0.   +0.j, 0.222+0.j, 0.   +0.j],
       [0.   +0.j, 0.   +0.j, 0.   +0.j, 0.444+0.j]], dtype=complex64)
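
As a quick consistency check (derived here, not in the original), the diagonal of \(\rho\) is exactly the bitstring sampling distribution: cirq.depolarize(p) flips each qubit's measured bit (via an X or Y error) with probability \(2p/3\), which is \(1/3\) at \(p = 0.5\), so the probabilities are \((1/3)^2\), \((1/3)(2/3)\), \((2/3)(1/3)\) and \((2/3)^2\):

# The diagonal of the noisy rho computed above is the sampling distribution.
print(np.round(np.real(np.diag(rho)), 3))  # [0.111 0.222 0.222 0.444]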

Comparing the two different \(\rho\)'s, you can see that the noise has impacted the amplitudes of the state (and consequently the sampling probabilities). In the noiseless case you would always expect to sample the \(|11\rangle\) state. But in the noisy state there is now a nonzero probability of sampling \(|00\rangle\), \(|01\rangle\) or \(|10\rangle\) as well:

"""Sample from my_noisy_circuit."""
def plot_samples(circuit):
  samples = cirq.sample(circuit + cirq.measure(*circuit.all_qubits(), key='bits'), repetitions=1000)
  freqs, _ = np.histogram(samples.data['bits'], bins=[i+0.01 for i in range(-1,2** len(my_qubits))])
  plt.figure(figsize=(10,5))
  plt.title('Noisy Circuit Sampling')
  plt.xlabel('Bitstring')
  plt.ylabel('Frequency')
  plt.bar([i for i in range(2** len(my_qubits))], freqs, tick_label=['00','01','10','11'])

plot_samples(my_noisy_circuit)

[Figure: bitstring frequency histogram for my_noisy_circuit]

Without any noise you will always get \(|11\rangle\):

"""Sample from my_circuit."""
plot_samples(my_circuit)

[Figure: bitstring frequency histogram for the noiseless my_circuit]

If you increase the noise a little further it will become harder and harder to distinguish the desired behavior (sampling \(|11\rangle\)) from the noise:

my_really_noisy_circuit = make_noisy(my_circuit, 0.75)
plot_samples(my_really_noisy_circuit)

[Figure: bitstring frequency histogram for my_really_noisy_circuit]

2. Basic noise in TFQ

With this understanding of how noise can impact circuit execution, you can explore how noise works in TFQ. TensorFlow Quantum uses Monte Carlo / trajectory simulation as an alternative to density matrix simulation, because the memory complexity of full density matrix simulation limits traditional methods to roughly 20 qubits or fewer. Monte Carlo / trajectory simulation trades this memory cost for additional cost in time. The backend='noisy' option is available on tfq.layers.Sample, tfq.layers.SampledExpectation and tfq.layers.Expectation (in the case of Expectation it adds a required repetitions parameter).
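
To see why trajectories help, compare the raw memory footprints (a back-of-the-envelope sketch, assuming 8 bytes per complex64 amplitude): a state vector needs \(2^n\) amplitudes, while a full density matrix needs \(4^n\) entries:

# Rough memory comparison: state vector (2**n amplitudes) vs density
# matrix (4**n entries), at 8 bytes per complex64 value.
for n in [10, 20, 30]:
  state_gb = 8 * 2**n / 1e9
  rho_gb = 8 * 4**n / 1e9
  print(f'{n} qubits: state vector ~{state_gb:.3g} GB, density matrix ~{rho_gb:.3g} GB')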

2.1 Noisy sampling in TFQ

To recreate the above plots using TFQ and trajectory simulation, you can use tfq.layers.Sample:

"""Draw bitstring samples from `my_noisy_circuit`"""
bitstrings = tfq.layers.Sample(backend='noisy')(my_noisy_circuit, repetitions=1000)
numeric_values = np.einsum('ijk,k->ij', bitstrings.to_tensor().numpy(), [1, 2])[0]
freqs, _ = np.histogram(numeric_values, bins=[i+0.01 for i in range(-1,2** len(my_qubits))])
plt.figure(figsize=(10,5))
plt.title('Noisy Circuit Sampling')
plt.xlabel('Bitstring')
plt.ylabel('Frequency')
plt.bar([i for i in range(2** len(my_qubits))], freqs, tick_label=['00','01','10','11'])
<BarContainer object of 4 artists>

[Figure: bitstring frequency histogram sampled via tfq.layers.Sample(backend='noisy')]

2.2 Noisy sample based expectation

To do noisy sample-based expectation calculation you can use tfq.layers.SampledExpectation:

some_observables = [cirq.X(my_qubits[0]), cirq.Z(my_qubits[0]), 3.0 * cirq.Y(my_qubits[1]) + 1]
some_observables
[cirq.X(cirq.GridQubit(0, 0)),
 cirq.Z(cirq.GridQubit(0, 0)),
 cirq.PauliSum(cirq.LinearDict({frozenset({(cirq.GridQubit(0, 1), cirq.Y)}): (3+0j), frozenset(): (1+0j)}))]

Compute the noiseless expectation estimates via sampling from the circuit:

noiseless_sampled_expectation = tfq.layers.SampledExpectation(backend='noiseless')(
    my_circuit, operators=some_observables, repetitions=10000
)
noiseless_sampled_expectation.numpy()
array([[-0.0046, -1.    ,  0.9754]], dtype=float32)

Compare those with the noisy versions:

noisy_sampled_expectation = tfq.layers.SampledExpectation(backend='noisy')(
    [my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000
)
noisy_sampled_expectation.numpy()
array([[ 0.005    , -0.3302   ,  1.0126   ],
       [-0.0132   , -0.0292   ,  1.0029999]], dtype=float32)

You can see that the noise has particularly impacted the \(\langle \psi | Z | \psi \rangle\) accuracy, with my_really_noisy_circuit concentrating very quickly towards 0.
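
This falloff can be predicted exactly (a derivation added here, not part of the original tutorial): cirq.depolarize(p) applies each of X, Y and Z with probability \(p/3\), and since X and Y flip the sign of \(\langle Z \rangle\) while Z preserves it, one application of the channel shrinks \(\langle Z \rangle\) by a factor of \(1 - 4p/3\):

# Predicted <Z> after one depolarizing channel, given the ideal value of -1.
for p, name in [(0.5, 'my_noisy_circuit'), (0.75, 'my_really_noisy_circuit')]:
  print(f'{name}: predicted <Z> = {(1 - 4 * p / 3) * -1.0:.4f}')
# Predicted -0.3333 and -0.0000 agree with the sampled -0.3302 and -0.0292 above.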

2.3 Noisy analytic expectation calculation

Doing noisy analytic expectation calculations is nearly identical to above:

noiseless_analytic_expectation = tfq.layers.Expectation(backend='noiseless')(
    my_circuit, operators=some_observables
)
noiseless_analytic_expectation.numpy()
array([[ 1.9106853e-15, -1.0000000e+00,  1.0000002e+00]], dtype=float32)
noisy_analytic_expectation = tfq.layers.Expectation(backend='noisy')(
    [my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000
)
noisy_analytic_expectation.numpy()
array([[ 1.9106855e-15, -3.2699999e-01,  1.0000000e+00],
       [ 1.9106855e-15,  1.1399999e-02,  1.0000000e+00]], dtype=float32)

3. Hybrid models and quantum data noise

Now that you have implemented some noisy circuit simulations in TFQ, you can experiment with how noise impacts quantum and hybrid quantum-classical models by comparing and contrasting their noisy vs. noiseless performance. A good first check to see if a model or algorithm is robust to noise is to test under a circuit-wide depolarizing model, which looks something like this:

Here each time slice of the circuit (sometimes referred to as a moment) has a depolarizing channel appended after each gate operation in that time slice. The depolarizing channel will apply one of \(\{X, Y, Z \}\) with probability \(p\) or apply nothing (keep the original operation) with probability \(1-p\).
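
In cirq this model is produced by cirq.Circuit.with_noise, which appends a channel to every qubit after each moment. A small illustration (the two-qubit circuit here is chosen just for demonstration):

# `with_noise` inserts a depolarizing channel after every moment.
demo_qubits = cirq.GridQubit.rect(1, 2)
demo_circuit = cirq.Circuit([cirq.H(demo_qubits[0]), cirq.CNOT(*demo_qubits)])
print(demo_circuit.with_noise(cirq.depolarize(0.01)))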

3.1 Data

For this example you can use some prepared circuits in the tfq.datasets module as training data:

qubits = cirq.GridQubit.rect(1, 8)
circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')
circuits[0]

Writing a small helper function makes it easy to generate the data for both the noisy and noiseless cases:

def get_data(qubits, depolarize_p=0.):
  """Return quantum data circuits and labels in `tf.Tensor` form."""
  circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')
  if depolarize_p >= 1e-5:
    circuits = [circuit.with_noise(cirq.depolarize(depolarize_p)) for circuit in circuits]
  tmp = list(zip(circuits, labels))
  random.shuffle(tmp)
  circuits_tensor = tfq.convert_to_tensor([x[0] for x in tmp])
  labels_tensor = tf.convert_to_tensor([x[1] for x in tmp])

  return circuits_tensor, labels_tensor
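
As a quick sanity check (illustrative only; it assumes the dataset download above succeeded), you can inspect the tensors this helper returns:

# Inspect the shapes of the circuit tensor and label tensor.
example_x, example_y = get_data(qubits, depolarize_p=0.)
print(example_x.shape, example_y.shape)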

3.2 Define a model circuit

Now that you have quantum data in the form of circuits, you will need a circuit to model this data. As with the data, you can write a helper function to generate this circuit, optionally containing noise:

def modelling_circuit(qubits, depth, depolarize_p=0.):
  """A simple classifier circuit."""
  ret = cirq.Circuit(cirq.H.on_each(*qubits))

  for i in range(depth):
    # Entangle layer.
    ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[::2], qubits[1::2]))
    ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[1::2], qubits[2::2]))
    # Learnable rotation layer.
    param = sympy.Symbol(f'layer-{i}')
    single_qb = cirq.X
    if i % 2 == 1:
      single_qb = cirq.Y
    ret += cirq.Circuit(single_qb(q) ** param for q in qubits)

  if depolarize_p >= 1e-5:
    ret = ret.with_noise(cirq.depolarize(depolarize_p))

  return ret, [op(q) for q in qubits for op in [cirq.X, cirq.Y, cirq.Z]]

modelling_circuit(qubits, 3)[0]

3.3 Model building and training

With your data and model circuit built, the final helper function you will need is one that can assemble either a noisy or a noiseless hybrid quantum-classical tf.keras.Model:

def build_keras_model(qubits, depolarize_p=0.):
  """Prepare a noisy hybrid quantum classical Keras model."""
  spin_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)

  circuit_and_readout = modelling_circuit(qubits, 4, depolarize_p)
  if depolarize_p >= 1e-5:
    quantum_model = tfq.layers.NoisyPQC(*circuit_and_readout, sample_based=False, repetitions=10)(spin_input)
  else:
    quantum_model = tfq.layers.PQC(*circuit_and_readout)(spin_input)

  intermediate = tf.keras.layers.Dense(4, activation='sigmoid')(quantum_model)
  post_process = tf.keras.layers.Dense(1)(intermediate)

  return tf.keras.Model(inputs=[spin_input], outputs=[post_process])

4. Compare performance

4.1 Noiseless baseline

With your data generation and model building code, you can now compare and contrast model performance in the noiseless and noisy settings. First, run a reference noiseless training:

training_histories = dict()
depolarize_p = 0.
n_epochs = 50
phase_classifier = build_keras_model(qubits, depolarize_p)

phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                   metrics=['accuracy'])


# Show the keras plot of the model
tf.keras.utils.plot_model(phase_classifier, show_shapes=True, dpi=70)

[Figure: Keras model diagram from tf.keras.utils.plot_model]

noiseless_data, noiseless_labels = get_data(qubits, depolarize_p)
training_histories['noiseless'] = phase_classifier.fit(x=noiseless_data,
                         y=noiseless_labels,
                         batch_size=16,
                         epochs=n_epochs,
                         validation_split=0.15,
                         verbose=1)
Epoch 1/50
4/4 [==============================] - 1s 149ms/step - loss: 0.7575 - accuracy: 0.5312 - val_loss: 0.7520 - val_accuracy: 0.5000
Epoch 2/50
4/4 [==============================] - 0s 89ms/step - loss: 0.7162 - accuracy: 0.5312 - val_loss: 0.7165 - val_accuracy: 1.0000
Epoch 3/50
4/4 [==============================] - 0s 88ms/step - loss: 0.6951 - accuracy: 0.5781 - val_loss: 0.6956 - val_accuracy: 0.5000
Epoch 4/50
4/4 [==============================] - 0s 84ms/step - loss: 0.6812 - accuracy: 0.4688 - val_loss: 0.6855 - val_accuracy: 0.5000
Epoch 5/50
4/4 [==============================] - 0s 86ms/step - loss: 0.6777 - accuracy: 0.4688 - val_loss: 0.6775 - val_accuracy: 0.5000
Epoch 6/50
4/4 [==============================] - 0s 84ms/step - loss: 0.6745 - accuracy: 0.4688 - val_loss: 0.6692 - val_accuracy: 0.5000
Epoch 7/50
4/4 [==============================] - 0s 84ms/step - loss: 0.6634 - accuracy: 0.4688 - val_loss: 0.6599 - val_accuracy: 0.5000
Epoch 8/50
4/4 [==============================] - 0s 83ms/step - loss: 0.6540 - accuracy: 0.4688 - val_loss: 0.6540 - val_accuracy: 0.5000
Epoch 9/50
4/4 [==============================] - 0s 85ms/step - loss: 0.6462 - accuracy: 0.4688 - val_loss: 0.6478 - val_accuracy: 0.5000
Epoch 10/50
4/4 [==============================] - 0s 84ms/step - loss: 0.6402 - accuracy: 0.4688 - val_loss: 0.6397 - val_accuracy: 0.5000
Epoch 11/50
4/4 [==============================] - 0s 84ms/step - loss: 0.6290 - accuracy: 0.5312 - val_loss: 0.6271 - val_accuracy: 0.5000
Epoch 12/50
4/4 [==============================] - 0s 84ms/step - loss: 0.6224 - accuracy: 0.5156 - val_loss: 0.6151 - val_accuracy: 0.5000
Epoch 13/50
4/4 [==============================] - 0s 83ms/step - loss: 0.6073 - accuracy: 0.5312 - val_loss: 0.6014 - val_accuracy: 0.5000
Epoch 14/50
4/4 [==============================] - 0s 86ms/step - loss: 0.5928 - accuracy: 0.6875 - val_loss: 0.5898 - val_accuracy: 0.9167
Epoch 15/50
4/4 [==============================] - 0s 84ms/step - loss: 0.5797 - accuracy: 0.7344 - val_loss: 0.5729 - val_accuracy: 0.9167
Epoch 16/50
4/4 [==============================] - 0s 83ms/step - loss: 0.5631 - accuracy: 0.7656 - val_loss: 0.5536 - val_accuracy: 0.9167
Epoch 17/50
4/4 [==============================] - 0s 83ms/step - loss: 0.5448 - accuracy: 0.8125 - val_loss: 0.5344 - val_accuracy: 0.9167
Epoch 18/50
4/4 [==============================] - 0s 84ms/step - loss: 0.5260 - accuracy: 0.8438 - val_loss: 0.5129 - val_accuracy: 0.9167
Epoch 19/50
4/4 [==============================] - 0s 85ms/step - loss: 0.5070 - accuracy: 0.8438 - val_loss: 0.4908 - val_accuracy: 0.9167
Epoch 20/50
4/4 [==============================] - 0s 83ms/step - loss: 0.4870 - accuracy: 0.8438 - val_loss: 0.4695 - val_accuracy: 1.0000
Epoch 21/50
4/4 [==============================] - 0s 84ms/step - loss: 0.4664 - accuracy: 0.8594 - val_loss: 0.4449 - val_accuracy: 1.0000
Epoch 22/50
4/4 [==============================] - 0s 86ms/step - loss: 0.4458 - accuracy: 0.8594 - val_loss: 0.4208 - val_accuracy: 1.0000
Epoch 23/50
4/4 [==============================] - 0s 84ms/step - loss: 0.4257 - accuracy: 0.8750 - val_loss: 0.3970 - val_accuracy: 1.0000
Epoch 24/50
4/4 [==============================] - 0s 85ms/step - loss: 0.4049 - accuracy: 0.8750 - val_loss: 0.3756 - val_accuracy: 1.0000
Epoch 25/50
4/4 [==============================] - 0s 85ms/step - loss: 0.3851 - accuracy: 0.9062 - val_loss: 0.3532 - val_accuracy: 1.0000
Epoch 26/50
4/4 [==============================] - 0s 84ms/step - loss: 0.3670 - accuracy: 0.9062 - val_loss: 0.3330 - val_accuracy: 1.0000
Epoch 27/50
4/4 [==============================] - 0s 84ms/step - loss: 0.3480 - accuracy: 0.9219 - val_loss: 0.3121 - val_accuracy: 1.0000
Epoch 28/50
4/4 [==============================] - 0s 85ms/step - loss: 0.3320 - accuracy: 0.9062 - val_loss: 0.2924 - val_accuracy: 1.0000
Epoch 29/50
4/4 [==============================] - 0s 84ms/step - loss: 0.3164 - accuracy: 0.9062 - val_loss: 0.2769 - val_accuracy: 1.0000
Epoch 30/50
4/4 [==============================] - 0s 86ms/step - loss: 0.3027 - accuracy: 0.9375 - val_loss: 0.2614 - val_accuracy: 1.0000
Epoch 31/50
4/4 [==============================] - 0s 85ms/step - loss: 0.2891 - accuracy: 0.9219 - val_loss: 0.2437 - val_accuracy: 1.0000
Epoch 32/50
4/4 [==============================] - 0s 83ms/step - loss: 0.2759 - accuracy: 0.9219 - val_loss: 0.2296 - val_accuracy: 1.0000
Epoch 33/50
4/4 [==============================] - 0s 84ms/step - loss: 0.2646 - accuracy: 0.9375 - val_loss: 0.2209 - val_accuracy: 1.0000
Epoch 34/50
4/4 [==============================] - 0s 84ms/step - loss: 0.2540 - accuracy: 0.9375 - val_loss: 0.2056 - val_accuracy: 1.0000
Epoch 35/50
4/4 [==============================] - 0s 84ms/step - loss: 0.2432 - accuracy: 0.9375 - val_loss: 0.1937 - val_accuracy: 1.0000
Epoch 36/50
4/4 [==============================] - 0s 85ms/step - loss: 0.2330 - accuracy: 0.9375 - val_loss: 0.1857 - val_accuracy: 1.0000
Epoch 37/50
4/4 [==============================] - 0s 84ms/step - loss: 0.2250 - accuracy: 0.9531 - val_loss: 0.1761 - val_accuracy: 1.0000
Epoch 38/50
4/4 [==============================] - 0s 87ms/step - loss: 0.2158 - accuracy: 0.9531 - val_loss: 0.1659 - val_accuracy: 1.0000
Epoch 39/50
4/4 [==============================] - 0s 83ms/step - loss: 0.2091 - accuracy: 0.9531 - val_loss: 0.1578 - val_accuracy: 1.0000
Epoch 40/50
4/4 [==============================] - 0s 84ms/step - loss: 0.2026 - accuracy: 0.9375 - val_loss: 0.1503 - val_accuracy: 1.0000
Epoch 41/50
4/4 [==============================] - 0s 85ms/step - loss: 0.1966 - accuracy: 0.9531 - val_loss: 0.1478 - val_accuracy: 1.0000
Epoch 42/50
4/4 [==============================] - 0s 84ms/step - loss: 0.1895 - accuracy: 0.9688 - val_loss: 0.1389 - val_accuracy: 1.0000
Epoch 43/50
4/4 [==============================] - 0s 84ms/step - loss: 0.1840 - accuracy: 0.9531 - val_loss: 0.1294 - val_accuracy: 1.0000
Epoch 44/50
4/4 [==============================] - 0s 85ms/step - loss: 0.1824 - accuracy: 0.9531 - val_loss: 0.1278 - val_accuracy: 1.0000
Epoch 45/50
4/4 [==============================] - 0s 85ms/step - loss: 0.1738 - accuracy: 0.9531 - val_loss: 0.1209 - val_accuracy: 1.0000
Epoch 46/50
4/4 [==============================] - 0s 83ms/step - loss: 0.1707 - accuracy: 0.9531 - val_loss: 0.1146 - val_accuracy: 1.0000
Epoch 47/50
4/4 [==============================] - 0s 84ms/step - loss: 0.1655 - accuracy: 0.9688 - val_loss: 0.1131 - val_accuracy: 1.0000
Epoch 48/50
4/4 [==============================] - 0s 84ms/step - loss: 0.1638 - accuracy: 0.9531 - val_loss: 0.1049 - val_accuracy: 1.0000
Epoch 49/50
4/4 [==============================] - 0s 84ms/step - loss: 0.1567 - accuracy: 0.9531 - val_loss: 0.1058 - val_accuracy: 1.0000
Epoch 50/50
4/4 [==============================] - 0s 86ms/step - loss: 0.1553 - accuracy: 0.9531 - val_loss: 0.0992 - val_accuracy: 1.0000

And explore the results and accuracy:

loss_plotter = tfdocs.plots.HistoryPlotter(metric = 'loss', smoothing_std=10)
loss_plotter.plot(training_histories)

[Figure: training loss curve for the noiseless run]

acc_plotter = tfdocs.plots.HistoryPlotter(metric = 'accuracy', smoothing_std=10)
acc_plotter.plot(training_histories)

[Figure: training accuracy curve for the noiseless run]

4.2 Noisy comparison

Now you can build a new model with noisy structure and compare it to the above; the code is nearly identical:

depolarize_p = 0.001
n_epochs = 50
noisy_phase_classifier = build_keras_model(qubits, depolarize_p)

noisy_phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                   metrics=['accuracy'])


# Show the keras plot of the model
tf.keras.utils.plot_model(noisy_phase_classifier, show_shapes=True, dpi=70)

[Figure: Keras model diagram for the noisy model]

noisy_data, noisy_labels = get_data(qubits, depolarize_p)
training_histories['noisy'] = noisy_phase_classifier.fit(x=noisy_data,
                         y=noisy_labels,
                         batch_size=16,
                         epochs=n_epochs,
                         validation_split=0.15,
                         verbose=1)
Epoch 1/50
4/4 [==============================] - 11s 2s/step - loss: 0.7105 - accuracy: 0.4844 - val_loss: 0.7093 - val_accuracy: 0.6667
Epoch 2/50
4/4 [==============================] - 7s 2s/step - loss: 0.6905 - accuracy: 0.4844 - val_loss: 0.6903 - val_accuracy: 0.5000
Epoch 3/50
4/4 [==============================] - 7s 2s/step - loss: 0.6759 - accuracy: 0.4688 - val_loss: 0.6777 - val_accuracy: 0.5000
Epoch 4/50
4/4 [==============================] - 7s 2s/step - loss: 0.6664 - accuracy: 0.4688 - val_loss: 0.6682 - val_accuracy: 0.5000
Epoch 5/50
4/4 [==============================] - 7s 2s/step - loss: 0.6566 - accuracy: 0.4688 - val_loss: 0.6606 - val_accuracy: 0.5000
Epoch 6/50
4/4 [==============================] - 7s 2s/step - loss: 0.6527 - accuracy: 0.4688 - val_loss: 0.6507 - val_accuracy: 0.5000
Epoch 7/50
4/4 [==============================] - 7s 2s/step - loss: 0.6360 - accuracy: 0.4688 - val_loss: 0.6400 - val_accuracy: 0.5000
Epoch 8/50
4/4 [==============================] - 7s 2s/step - loss: 0.6350 - accuracy: 0.4688 - val_loss: 0.6314 - val_accuracy: 0.5000
Epoch 9/50
4/4 [==============================] - 7s 2s/step - loss: 0.6206 - accuracy: 0.4844 - val_loss: 0.6213 - val_accuracy: 0.5833
Epoch 10/50
4/4 [==============================] - 7s 2s/step - loss: 0.6123 - accuracy: 0.5000 - val_loss: 0.6042 - val_accuracy: 0.5833
Epoch 11/50
4/4 [==============================] - 7s 2s/step - loss: 0.5887 - accuracy: 0.5938 - val_loss: 0.6056 - val_accuracy: 0.7500
Epoch 12/50
4/4 [==============================] - 7s 2s/step - loss: 0.5848 - accuracy: 0.6250 - val_loss: 0.5791 - val_accuracy: 0.8333
Epoch 13/50
4/4 [==============================] - 7s 2s/step - loss: 0.5689 - accuracy: 0.6562 - val_loss: 0.5696 - val_accuracy: 0.8333
Epoch 14/50
4/4 [==============================] - 7s 2s/step - loss: 0.5556 - accuracy: 0.7188 - val_loss: 0.5649 - val_accuracy: 0.9167
Epoch 15/50
4/4 [==============================] - 7s 2s/step - loss: 0.5403 - accuracy: 0.7500 - val_loss: 0.5862 - val_accuracy: 0.7500
Epoch 16/50
4/4 [==============================] - 7s 2s/step - loss: 0.5172 - accuracy: 0.7656 - val_loss: 0.5367 - val_accuracy: 0.7500
Epoch 17/50
4/4 [==============================] - 7s 2s/step - loss: 0.5119 - accuracy: 0.7656 - val_loss: 0.5128 - val_accuracy: 0.9167
Epoch 18/50
4/4 [==============================] - 7s 2s/step - loss: 0.4856 - accuracy: 0.8750 - val_loss: 0.5188 - val_accuracy: 0.9167
Epoch 19/50
4/4 [==============================] - 7s 2s/step - loss: 0.4746 - accuracy: 0.7969 - val_loss: 0.4890 - val_accuracy: 0.9167
Epoch 20/50
4/4 [==============================] - 7s 2s/step - loss: 0.4544 - accuracy: 0.7812 - val_loss: 0.4178 - val_accuracy: 0.9167
Epoch 21/50
4/4 [==============================] - 7s 2s/step - loss: 0.4134 - accuracy: 0.8750 - val_loss: 0.4601 - val_accuracy: 0.9167
Epoch 22/50
4/4 [==============================] - 7s 2s/step - loss: 0.4445 - accuracy: 0.8281 - val_loss: 0.4099 - val_accuracy: 0.9167
Epoch 23/50
4/4 [==============================] - 7s 2s/step - loss: 0.3826 - accuracy: 0.9062 - val_loss: 0.4481 - val_accuracy: 0.8333
Epoch 24/50
4/4 [==============================] - 7s 2s/step - loss: 0.3854 - accuracy: 0.8750 - val_loss: 0.3954 - val_accuracy: 0.9167
Epoch 25/50
4/4 [==============================] - 7s 2s/step - loss: 0.3808 - accuracy: 0.8281 - val_loss: 0.3374 - val_accuracy: 0.9167
Epoch 26/50
4/4 [==============================] - 7s 2s/step - loss: 0.3488 - accuracy: 0.8438 - val_loss: 0.3693 - val_accuracy: 0.9167
Epoch 27/50
4/4 [==============================] - 7s 2s/step - loss: 0.3399 - accuracy: 0.8750 - val_loss: 0.3323 - val_accuracy: 0.9167
Epoch 28/50
4/4 [==============================] - 7s 2s/step - loss: 0.3219 - accuracy: 0.9219 - val_loss: 0.3695 - val_accuracy: 0.9167
Epoch 29/50
4/4 [==============================] - 7s 2s/step - loss: 0.3061 - accuracy: 0.9688 - val_loss: 0.3007 - val_accuracy: 1.0000
Epoch 30/50
4/4 [==============================] - 7s 2s/step - loss: 0.2904 - accuracy: 0.8906 - val_loss: 0.2723 - val_accuracy: 1.0000
Epoch 31/50
4/4 [==============================] - 7s 2s/step - loss: 0.2568 - accuracy: 0.9375 - val_loss: 0.3741 - val_accuracy: 0.8333
Epoch 32/50
4/4 [==============================] - 7s 2s/step - loss: 0.2688 - accuracy: 0.9062 - val_loss: 0.3242 - val_accuracy: 0.9167
Epoch 33/50
4/4 [==============================] - 7s 2s/step - loss: 0.2425 - accuracy: 0.9219 - val_loss: 0.3539 - val_accuracy: 0.9167
Epoch 34/50
4/4 [==============================] - 7s 2s/step - loss: 0.2350 - accuracy: 0.9844 - val_loss: 0.2606 - val_accuracy: 1.0000
Epoch 35/50
4/4 [==============================] - 7s 2s/step - loss: 0.2719 - accuracy: 0.8750 - val_loss: 0.2799 - val_accuracy: 1.0000
Epoch 36/50
4/4 [==============================] - 7s 2s/step - loss: 0.2522 - accuracy: 0.9062 - val_loss: 0.3099 - val_accuracy: 1.0000
Epoch 37/50
4/4 [==============================] - 7s 2s/step - loss: 0.2506 - accuracy: 0.9219 - val_loss: 0.2458 - val_accuracy: 0.9167
Epoch 38/50
4/4 [==============================] - 7s 2s/step - loss: 0.2247 - accuracy: 0.9219 - val_loss: 0.2818 - val_accuracy: 0.9167
Epoch 39/50
4/4 [==============================] - 7s 2s/step - loss: 0.2757 - accuracy: 0.8750 - val_loss: 0.2683 - val_accuracy: 1.0000
Epoch 40/50
4/4 [==============================] - 7s 2s/step - loss: 0.2154 - accuracy: 0.9688 - val_loss: 0.3112 - val_accuracy: 0.9167
Epoch 41/50
4/4 [==============================] - 7s 2s/step - loss: 0.2028 - accuracy: 0.9375 - val_loss: 0.3427 - val_accuracy: 0.8333
Epoch 42/50
4/4 [==============================] - 7s 2s/step - loss: 0.2027 - accuracy: 0.9531 - val_loss: 0.2447 - val_accuracy: 0.9167
Epoch 43/50
4/4 [==============================] - 7s 2s/step - loss: 0.1957 - accuracy: 0.9531 - val_loss: 0.2053 - val_accuracy: 1.0000
Epoch 44/50
4/4 [==============================] - 7s 2s/step - loss: 0.2013 - accuracy: 0.9375 - val_loss: 0.3060 - val_accuracy: 0.9167
Epoch 45/50
4/4 [==============================] - 7s 2s/step - loss: 0.1953 - accuracy: 0.9531 - val_loss: 0.2424 - val_accuracy: 1.0000
Epoch 46/50
4/4 [==============================] - 7s 2s/step - loss: 0.1699 - accuracy: 0.9375 - val_loss: 0.1700 - val_accuracy: 0.9167
Epoch 47/50
4/4 [==============================] - 7s 2s/step - loss: 0.1689 - accuracy: 0.9688 - val_loss: 0.2353 - val_accuracy: 0.9167
Epoch 48/50
4/4 [==============================] - 7s 2s/step - loss: 0.1700 - accuracy: 0.9375 - val_loss: 0.2262 - val_accuracy: 0.9167
Epoch 49/50
4/4 [==============================] - 7s 2s/step - loss: 0.1507 - accuracy: 0.9844 - val_loss: 0.1694 - val_accuracy: 1.0000
Epoch 50/50
4/4 [==============================] - 7s 2s/step - loss: 0.1372 - accuracy: 0.9844 - val_loss: 0.1854 - val_accuracy: 1.0000
loss_plotter.plot(training_histories)

[Figure: loss curves for the noiseless vs. noisy runs]

acc_plotter.plot(training_histories)

[Figure: accuracy curves for the noiseless vs. noisy runs]