Explore overfitting and underfitting

As always, the code in this example will use the tf.keras API, which you can learn more about in the TensorFlow Keras guide.

In both of the previous examples—classifying movie reviews and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.

In other words, our model would overfit to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the training set, what we really want is to develop models that generalize well to a testing set (or data they haven't seen before).

The opposite of overfitting is underfitting. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: the model is not powerful enough, it is over-regularized, or it has simply not been trained long enough. In each case, the network has not learned the relevant patterns in the training data.

If you train for too long, though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs, as we'll explore below, is a useful skill.
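
One practical way to pick the number of epochs automatically is to watch the validation loss and stop when it stops improving. The sketch below is not used in the rest of this notebook; it assumes an already-compiled model and uses an arbitrary patience value:

early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss',          # watch the validation loss
    patience=2,                  # tolerate 2 epochs with no improvement
    restore_best_weights=True)   # roll back to the best epoch's weights

# model.fit(train_data, train_labels, epochs=50, batch_size=512,
#           validation_data=(test_data, test_labels), callbacks=[early_stop])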

To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.

In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.

from __future__ import absolute_import, division, print_function, unicode_literals

try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf
from tensorflow import keras

import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
2.0.0-beta1

Download the IMDB dataset

Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.

Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence [3, 5] into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.

NUM_WORDS = 10000

(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

def multi_hot_sequences(sequences, dimension):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set specific indices of results[i] to 1s
    return results


train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)

Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:

plt.plot(train_data[0])
[<matplotlib.lines.Line2D at 0x7f42bd4e5390>]

[Plot: multi-hot encoding of the first training example, with most 1-values near index zero.]

Demonstrate overfitting

The simplest way to prevent overfitting is to reduce the size of the model, i.e., the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and will therefore be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, which is useless when making predictions on previously unseen data.

Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.

On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".

Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.

To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.
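
As a rough sketch of that procedure (an illustrative aside, not part of the workflow below), you could train a few candidate widths for a handful of epochs and compare their best validation loss; the widths and epoch count here are arbitrary choices:

def try_width(units):
    # Build and briefly train a two-hidden-layer model with `units` units per layer.
    model = keras.Sequential([
        keras.layers.Dense(units, activation='relu', input_shape=(NUM_WORDS,)),
        keras.layers.Dense(units, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    history = model.fit(train_data, train_labels, epochs=5, batch_size=512,
                        validation_data=(test_data, test_labels), verbose=0)
    return min(history.history['val_loss'])

# for units in [4, 16, 64]:   # arbitrary candidate sizes
#     print(units, try_width(units))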

We'll create a simple model using only Dense layers as a baseline, then create smaller and larger versions, and compare them.

Create a baseline model

baseline_model = keras.Sequential([
    # `input_shape` is only required here so that `.summary` works.
    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

baseline_model.compile(optimizer='adam',
                       loss='binary_crossentropy',
                       metrics=['accuracy', 'binary_crossentropy'])

baseline_model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 16)                160016    
_________________________________________________________________
dense_1 (Dense)              (None, 16)                272       
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
_________________________________________________________________
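
The parameter counts above follow directly from the layer shapes: a Dense layer with n inputs and m units has n * m weights plus m biases. So the first layer has 10000 * 16 + 16 = 160,016 parameters, the second 16 * 16 + 16 = 272, and the output layer 16 * 1 + 1 = 17, for a total of 160,305.
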
baseline_history = baseline_model.fit(train_data,
                                      train_labels,
                                      epochs=20,
                                      batch_size=512,
                                      validation_data=(test_data, test_labels),
                                      verbose=2)

Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 - 5s - loss: 0.4979 - accuracy: 0.7915 - binary_crossentropy: 0.4979 - val_loss: 0.3383 - val_accuracy: 0.8744 - val_binary_crossentropy: 0.3383
Epoch 2/20
25000/25000 - 4s - loss: 0.2480 - accuracy: 0.9108 - binary_crossentropy: 0.2480 - val_loss: 0.2838 - val_accuracy: 0.8886 - val_binary_crossentropy: 0.2838
Epoch 3/20
25000/25000 - 4s - loss: 0.1794 - accuracy: 0.9370 - binary_crossentropy: 0.1794 - val_loss: 0.2927 - val_accuracy: 0.8829 - val_binary_crossentropy: 0.2927
Epoch 4/20
25000/25000 - 4s - loss: 0.1463 - accuracy: 0.9504 - binary_crossentropy: 0.1463 - val_loss: 0.3251 - val_accuracy: 0.8736 - val_binary_crossentropy: 0.3251
Epoch 5/20
25000/25000 - 4s - loss: 0.1211 - accuracy: 0.9601 - binary_crossentropy: 0.1211 - val_loss: 0.3374 - val_accuracy: 0.8745 - val_binary_crossentropy: 0.3374
Epoch 6/20
25000/25000 - 4s - loss: 0.1011 - accuracy: 0.9689 - binary_crossentropy: 0.1011 - val_loss: 0.3670 - val_accuracy: 0.8712 - val_binary_crossentropy: 0.3670
Epoch 7/20
25000/25000 - 4s - loss: 0.0853 - accuracy: 0.9745 - binary_crossentropy: 0.0853 - val_loss: 0.4009 - val_accuracy: 0.8672 - val_binary_crossentropy: 0.4009
Epoch 8/20
25000/25000 - 4s - loss: 0.0736 - accuracy: 0.9797 - binary_crossentropy: 0.0736 - val_loss: 0.4381 - val_accuracy: 0.8632 - val_binary_crossentropy: 0.4381
Epoch 9/20
25000/25000 - 4s - loss: 0.0606 - accuracy: 0.9843 - binary_crossentropy: 0.0606 - val_loss: 0.4751 - val_accuracy: 0.8616 - val_binary_crossentropy: 0.4751
Epoch 10/20
25000/25000 - 4s - loss: 0.0523 - accuracy: 0.9873 - binary_crossentropy: 0.0523 - val_loss: 0.5151 - val_accuracy: 0.8574 - val_binary_crossentropy: 0.5151
Epoch 11/20
25000/25000 - 4s - loss: 0.0424 - accuracy: 0.9914 - binary_crossentropy: 0.0424 - val_loss: 0.5557 - val_accuracy: 0.8562 - val_binary_crossentropy: 0.5557
Epoch 12/20
25000/25000 - 4s - loss: 0.0345 - accuracy: 0.9939 - binary_crossentropy: 0.0345 - val_loss: 0.5978 - val_accuracy: 0.8545 - val_binary_crossentropy: 0.5978
Epoch 13/20
25000/25000 - 4s - loss: 0.0284 - accuracy: 0.9956 - binary_crossentropy: 0.0284 - val_loss: 0.6437 - val_accuracy: 0.8521 - val_binary_crossentropy: 0.6437
Epoch 14/20
25000/25000 - 4s - loss: 0.0225 - accuracy: 0.9973 - binary_crossentropy: 0.0225 - val_loss: 0.6836 - val_accuracy: 0.8497 - val_binary_crossentropy: 0.6836
Epoch 15/20
25000/25000 - 4s - loss: 0.0181 - accuracy: 0.9982 - binary_crossentropy: 0.0181 - val_loss: 0.7141 - val_accuracy: 0.8494 - val_binary_crossentropy: 0.7141
Epoch 16/20
25000/25000 - 4s - loss: 0.0144 - accuracy: 0.9990 - binary_crossentropy: 0.0144 - val_loss: 0.7568 - val_accuracy: 0.8488 - val_binary_crossentropy: 0.7568
Epoch 17/20
25000/25000 - 4s - loss: 0.0119 - accuracy: 0.9994 - binary_crossentropy: 0.0119 - val_loss: 0.7892 - val_accuracy: 0.8488 - val_binary_crossentropy: 0.7892
Epoch 18/20
25000/25000 - 4s - loss: 0.0096 - accuracy: 0.9996 - binary_crossentropy: 0.0096 - val_loss: 0.8203 - val_accuracy: 0.8474 - val_binary_crossentropy: 0.8203
Epoch 19/20
25000/25000 - 4s - loss: 0.0077 - accuracy: 0.9998 - binary_crossentropy: 0.0077 - val_loss: 0.8520 - val_accuracy: 0.8478 - val_binary_crossentropy: 0.8520
Epoch 20/20
25000/25000 - 4s - loss: 0.0064 - accuracy: 0.9998 - binary_crossentropy: 0.0064 - val_loss: 0.8792 - val_accuracy: 0.8477 - val_binary_crossentropy: 0.8792

Create a smaller model

Let's create a model with fewer hidden units to compare against the baseline model that we just created:

smaller_model = keras.Sequential([
    keras.layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(4, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

smaller_model.compile(optimizer='adam',
                      loss='binary_crossentropy',
                      metrics=['accuracy', 'binary_crossentropy'])

smaller_model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_3 (Dense)              (None, 4)                 40004     
_________________________________________________________________
dense_4 (Dense)              (None, 4)                 20        
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 5         
=================================================================
Total params: 40,029
Trainable params: 40,029
Non-trainable params: 0
_________________________________________________________________

And train the model using the same data:

smaller_history = smaller_model.fit(train_data,
                                    train_labels,
                                    epochs=20,
                                    batch_size=512,
                                    validation_data=(test_data, test_labels),
                                    verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 - 4s - loss: 0.6349 - accuracy: 0.6216 - binary_crossentropy: 0.6349 - val_loss: 0.5840 - val_accuracy: 0.7680 - val_binary_crossentropy: 0.5840
Epoch 2/20
25000/25000 - 3s - loss: 0.5333 - accuracy: 0.7988 - binary_crossentropy: 0.5333 - val_loss: 0.5211 - val_accuracy: 0.8184 - val_binary_crossentropy: 0.5211
Epoch 3/20
25000/25000 - 3s - loss: 0.4752 - accuracy: 0.8606 - binary_crossentropy: 0.4752 - val_loss: 0.4858 - val_accuracy: 0.8541 - val_binary_crossentropy: 0.4858
Epoch 4/20
25000/25000 - 3s - loss: 0.4352 - accuracy: 0.8930 - binary_crossentropy: 0.4352 - val_loss: 0.4628 - val_accuracy: 0.8654 - val_binary_crossentropy: 0.4628
Epoch 5/20
25000/25000 - 3s - loss: 0.4029 - accuracy: 0.9138 - binary_crossentropy: 0.4029 - val_loss: 0.4480 - val_accuracy: 0.8691 - val_binary_crossentropy: 0.4480
Epoch 6/20
25000/25000 - 3s - loss: 0.3758 - accuracy: 0.9294 - binary_crossentropy: 0.3758 - val_loss: 0.4394 - val_accuracy: 0.8684 - val_binary_crossentropy: 0.4394
Epoch 7/20
25000/25000 - 3s - loss: 0.3515 - accuracy: 0.9422 - binary_crossentropy: 0.3515 - val_loss: 0.4339 - val_accuracy: 0.8693 - val_binary_crossentropy: 0.4339
Epoch 8/20
25000/25000 - 3s - loss: 0.3299 - accuracy: 0.9524 - binary_crossentropy: 0.3299 - val_loss: 0.4242 - val_accuracy: 0.8774 - val_binary_crossentropy: 0.4242
Epoch 9/20
25000/25000 - 3s - loss: 0.3108 - accuracy: 0.9600 - binary_crossentropy: 0.3108 - val_loss: 0.4170 - val_accuracy: 0.8803 - val_binary_crossentropy: 0.4170
Epoch 10/20
25000/25000 - 3s - loss: 0.2936 - accuracy: 0.9665 - binary_crossentropy: 0.2936 - val_loss: 0.4223 - val_accuracy: 0.8746 - val_binary_crossentropy: 0.4223
Epoch 11/20
25000/25000 - 3s - loss: 0.2773 - accuracy: 0.9712 - binary_crossentropy: 0.2773 - val_loss: 0.4218 - val_accuracy: 0.8742 - val_binary_crossentropy: 0.4218
Epoch 12/20
25000/25000 - 3s - loss: 0.2628 - accuracy: 0.9761 - binary_crossentropy: 0.2628 - val_loss: 0.4245 - val_accuracy: 0.8732 - val_binary_crossentropy: 0.4245
Epoch 13/20
25000/25000 - 3s - loss: 0.2494 - accuracy: 0.9796 - binary_crossentropy: 0.2494 - val_loss: 0.4192 - val_accuracy: 0.8737 - val_binary_crossentropy: 0.4192
Epoch 14/20
25000/25000 - 3s - loss: 0.2374 - accuracy: 0.9819 - binary_crossentropy: 0.2374 - val_loss: 0.4311 - val_accuracy: 0.8708 - val_binary_crossentropy: 0.4311
Epoch 15/20
25000/25000 - 3s - loss: 0.2260 - accuracy: 0.9839 - binary_crossentropy: 0.2260 - val_loss: 0.4391 - val_accuracy: 0.8690 - val_binary_crossentropy: 0.4391
Epoch 16/20
25000/25000 - 3s - loss: 0.2156 - accuracy: 0.9856 - binary_crossentropy: 0.2156 - val_loss: 0.4406 - val_accuracy: 0.8691 - val_binary_crossentropy: 0.4406
Epoch 17/20
25000/25000 - 3s - loss: 0.2062 - accuracy: 0.9870 - binary_crossentropy: 0.2062 - val_loss: 0.4521 - val_accuracy: 0.8666 - val_binary_crossentropy: 0.4521
Epoch 18/20
25000/25000 - 3s - loss: 0.1973 - accuracy: 0.9884 - binary_crossentropy: 0.1973 - val_loss: 0.4560 - val_accuracy: 0.8670 - val_binary_crossentropy: 0.4560
Epoch 19/20
25000/25000 - 3s - loss: 0.1894 - accuracy: 0.9890 - binary_crossentropy: 0.1894 - val_loss: 0.4612 - val_accuracy: 0.8664 - val_binary_crossentropy: 0.4612
Epoch 20/20
25000/25000 - 3s - loss: 0.1820 - accuracy: 0.9895 - binary_crossentropy: 0.1820 - val_loss: 0.4602 - val_accuracy: 0.8671 - val_binary_crossentropy: 0.4602

Create a bigger model

Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant. (As an exercise, you can later create an even larger model and see how quickly it begins overfitting.)

bigger_model = keras.models.Sequential([
    keras.layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(512, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

bigger_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy','binary_crossentropy'])

bigger_model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_6 (Dense)              (None, 512)               5120512   
_________________________________________________________________
dense_7 (Dense)              (None, 512)               262656    
_________________________________________________________________
dense_8 (Dense)              (None, 1)                 513       
=================================================================
Total params: 5,383,681
Trainable params: 5,383,681
Non-trainable params: 0
_________________________________________________________________

And, again, train the model using the same data:

bigger_history = bigger_model.fit(train_data, train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 - 3s - loss: 0.3483 - accuracy: 0.8501 - binary_crossentropy: 0.3483 - val_loss: 0.3068 - val_accuracy: 0.8729 - val_binary_crossentropy: 0.3068
Epoch 2/20
25000/25000 - 3s - loss: 0.1422 - accuracy: 0.9485 - binary_crossentropy: 0.1422 - val_loss: 0.3259 - val_accuracy: 0.8753 - val_binary_crossentropy: 0.3259
Epoch 3/20
25000/25000 - 3s - loss: 0.0435 - accuracy: 0.9876 - binary_crossentropy: 0.0435 - val_loss: 0.4349 - val_accuracy: 0.8697 - val_binary_crossentropy: 0.4349
Epoch 4/20
25000/25000 - 3s - loss: 0.0060 - accuracy: 0.9990 - binary_crossentropy: 0.0060 - val_loss: 0.5830 - val_accuracy: 0.8694 - val_binary_crossentropy: 0.5830
Epoch 5/20
25000/25000 - 4s - loss: 6.3809e-04 - accuracy: 1.0000 - binary_crossentropy: 6.3809e-04 - val_loss: 0.6760 - val_accuracy: 0.8709 - val_binary_crossentropy: 0.6760
Epoch 6/20
25000/25000 - 3s - loss: 1.7646e-04 - accuracy: 1.0000 - binary_crossentropy: 1.7646e-04 - val_loss: 0.7133 - val_accuracy: 0.8710 - val_binary_crossentropy: 0.7133
Epoch 7/20
25000/25000 - 3s - loss: 1.1119e-04 - accuracy: 1.0000 - binary_crossentropy: 1.1119e-04 - val_loss: 0.7378 - val_accuracy: 0.8714 - val_binary_crossentropy: 0.7378
Epoch 8/20
25000/25000 - 3s - loss: 8.0311e-05 - accuracy: 1.0000 - binary_crossentropy: 8.0311e-05 - val_loss: 0.7566 - val_accuracy: 0.8714 - val_binary_crossentropy: 0.7566
Epoch 9/20
25000/25000 - 3s - loss: 6.1655e-05 - accuracy: 1.0000 - binary_crossentropy: 6.1655e-05 - val_loss: 0.7714 - val_accuracy: 0.8714 - val_binary_crossentropy: 0.7714
Epoch 10/20
25000/25000 - 3s - loss: 4.9191e-05 - accuracy: 1.0000 - binary_crossentropy: 4.9191e-05 - val_loss: 0.7840 - val_accuracy: 0.8713 - val_binary_crossentropy: 0.7840
Epoch 11/20
25000/25000 - 3s - loss: 4.0094e-05 - accuracy: 1.0000 - binary_crossentropy: 4.0094e-05 - val_loss: 0.7957 - val_accuracy: 0.8713 - val_binary_crossentropy: 0.7957
Epoch 12/20
25000/25000 - 3s - loss: 3.3443e-05 - accuracy: 1.0000 - binary_crossentropy: 3.3443e-05 - val_loss: 0.8056 - val_accuracy: 0.8714 - val_binary_crossentropy: 0.8056
Epoch 13/20
25000/25000 - 3s - loss: 2.8158e-05 - accuracy: 1.0000 - binary_crossentropy: 2.8158e-05 - val_loss: 0.8145 - val_accuracy: 0.8712 - val_binary_crossentropy: 0.8145
Epoch 14/20
25000/25000 - 3s - loss: 2.4056e-05 - accuracy: 1.0000 - binary_crossentropy: 2.4056e-05 - val_loss: 0.8235 - val_accuracy: 0.8710 - val_binary_crossentropy: 0.8235
Epoch 15/20
25000/25000 - 3s - loss: 2.0726e-05 - accuracy: 1.0000 - binary_crossentropy: 2.0726e-05 - val_loss: 0.8311 - val_accuracy: 0.8710 - val_binary_crossentropy: 0.8311
Epoch 16/20
25000/25000 - 3s - loss: 1.8032e-05 - accuracy: 1.0000 - binary_crossentropy: 1.8032e-05 - val_loss: 0.8388 - val_accuracy: 0.8708 - val_binary_crossentropy: 0.8388
Epoch 17/20
25000/25000 - 3s - loss: 1.5763e-05 - accuracy: 1.0000 - binary_crossentropy: 1.5763e-05 - val_loss: 0.8460 - val_accuracy: 0.8708 - val_binary_crossentropy: 0.8460
Epoch 18/20
25000/25000 - 3s - loss: 1.3861e-05 - accuracy: 1.0000 - binary_crossentropy: 1.3861e-05 - val_loss: 0.8528 - val_accuracy: 0.8709 - val_binary_crossentropy: 0.8528
Epoch 19/20
25000/25000 - 3s - loss: 1.2243e-05 - accuracy: 1.0000 - binary_crossentropy: 1.2243e-05 - val_loss: 0.8592 - val_accuracy: 0.8708 - val_binary_crossentropy: 0.8592
Epoch 20/20
25000/25000 - 3s - loss: 1.0861e-05 - accuracy: 1.0000 - binary_crossentropy: 1.0861e-05 - val_loss: 0.8656 - val_accuracy: 0.8709 - val_binary_crossentropy: 0.8656

Plot the training and validation loss

The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model, and its performance degrades much more slowly once it starts overfitting.

def plot_history(histories, key='binary_crossentropy'):
  plt.figure(figsize=(16,10))

  for name, history in histories:
    val = plt.plot(history.epoch, history.history['val_'+key],
                   '--', label=name.title()+' Val')
    plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
             label=name.title()+' Train')

  plt.xlabel('Epochs')
  plt.ylabel(key.replace('_',' ').title())
  plt.legend()

  plt.xlim([0,max(history.epoch)])


plot_history([('baseline', baseline_history),
              ('smaller', smaller_history),
              ('bigger', bigger_history)])

[Plot: training (solid) and validation (dashed) binary crossentropy for the baseline, smaller, and bigger models.]

Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss).

Strategies to prevent overfitting

Add weight regularization

You may be familiar with the principle of Occam's razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.

A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:

  • L1 regularization, where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights).

  • L2 regularization, where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization.

L1 regularization introduces sparsity, driving some of the weight parameters to exactly zero. L2 regularization penalizes the weight parameters without making them sparse, which is one reason L2 is more common.
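
To make the two penalties concrete, here is a small NumPy sketch (an aside, using made-up weight values and an arbitrary factor of 0.001) of what each flavor adds to the loss for a single weight matrix:

w = np.array([[0.5, -0.2], [0.1, 0.0]])   # made-up weight values
lam = 0.001                                # regularization factor (arbitrary)

l1_penalty = lam * np.sum(np.abs(w))       # proportional to the L1 norm
l2_penalty = lam * np.sum(np.square(w))    # proportional to the squared L2 norm
print(l1_penalty, l2_penalty)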

In tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.

l2_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

l2_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l2_model_history = l2_model.fit(train_data, train_labels,
                                epochs=20,
                                batch_size=512,
                                validation_data=(test_data, test_labels),
                                verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 - 4s - loss: 0.5124 - accuracy: 0.8198 - binary_crossentropy: 0.4699 - val_loss: 0.3749 - val_accuracy: 0.8774 - val_binary_crossentropy: 0.3294
Epoch 2/20
25000/25000 - 4s - loss: 0.3010 - accuracy: 0.9092 - binary_crossentropy: 0.2513 - val_loss: 0.3389 - val_accuracy: 0.8871 - val_binary_crossentropy: 0.2861
Epoch 3/20
25000/25000 - 4s - loss: 0.2525 - accuracy: 0.9302 - binary_crossentropy: 0.1972 - val_loss: 0.3417 - val_accuracy: 0.8855 - val_binary_crossentropy: 0.2848
Epoch 4/20
25000/25000 - 4s - loss: 0.2314 - accuracy: 0.9397 - binary_crossentropy: 0.1727 - val_loss: 0.3602 - val_accuracy: 0.8794 - val_binary_crossentropy: 0.3003
Epoch 5/20
25000/25000 - 4s - loss: 0.2165 - accuracy: 0.9472 - binary_crossentropy: 0.1555 - val_loss: 0.3705 - val_accuracy: 0.8769 - val_binary_crossentropy: 0.3087
Epoch 6/20
25000/25000 - 4s - loss: 0.2049 - accuracy: 0.9525 - binary_crossentropy: 0.1421 - val_loss: 0.3830 - val_accuracy: 0.8757 - val_binary_crossentropy: 0.3197
Epoch 7/20
25000/25000 - 4s - loss: 0.1979 - accuracy: 0.9540 - binary_crossentropy: 0.1338 - val_loss: 0.4028 - val_accuracy: 0.8720 - val_binary_crossentropy: 0.3381
Epoch 8/20
25000/25000 - 4s - loss: 0.1917 - accuracy: 0.9570 - binary_crossentropy: 0.1265 - val_loss: 0.4154 - val_accuracy: 0.8707 - val_binary_crossentropy: 0.3498
Epoch 9/20
25000/25000 - 4s - loss: 0.1861 - accuracy: 0.9576 - binary_crossentropy: 0.1197 - val_loss: 0.4306 - val_accuracy: 0.8665 - val_binary_crossentropy: 0.3638
Epoch 10/20
25000/25000 - 4s - loss: 0.1815 - accuracy: 0.9610 - binary_crossentropy: 0.1144 - val_loss: 0.4409 - val_accuracy: 0.8665 - val_binary_crossentropy: 0.3733
Epoch 11/20
25000/25000 - 3s - loss: 0.1764 - accuracy: 0.9632 - binary_crossentropy: 0.1086 - val_loss: 0.4533 - val_accuracy: 0.8652 - val_binary_crossentropy: 0.3850
Epoch 12/20
25000/25000 - 4s - loss: 0.1731 - accuracy: 0.9628 - binary_crossentropy: 0.1044 - val_loss: 0.4673 - val_accuracy: 0.8628 - val_binary_crossentropy: 0.3984
Epoch 13/20
25000/25000 - 4s - loss: 0.1686 - accuracy: 0.9670 - binary_crossentropy: 0.0994 - val_loss: 0.4785 - val_accuracy: 0.8628 - val_binary_crossentropy: 0.4091
Epoch 14/20
25000/25000 - 4s - loss: 0.1662 - accuracy: 0.9668 - binary_crossentropy: 0.0965 - val_loss: 0.4946 - val_accuracy: 0.8595 - val_binary_crossentropy: 0.4246
Epoch 15/20
25000/25000 - 3s - loss: 0.1656 - accuracy: 0.9668 - binary_crossentropy: 0.0954 - val_loss: 0.5132 - val_accuracy: 0.8606 - val_binary_crossentropy: 0.4426
Epoch 16/20
25000/25000 - 4s - loss: 0.1636 - accuracy: 0.9674 - binary_crossentropy: 0.0922 - val_loss: 0.5192 - val_accuracy: 0.8572 - val_binary_crossentropy: 0.4475
Epoch 17/20
25000/25000 - 5s - loss: 0.1603 - accuracy: 0.9693 - binary_crossentropy: 0.0888 - val_loss: 0.5251 - val_accuracy: 0.8585 - val_binary_crossentropy: 0.4535
Epoch 18/20
25000/25000 - 4s - loss: 0.1572 - accuracy: 0.9712 - binary_crossentropy: 0.0853 - val_loss: 0.5553 - val_accuracy: 0.8542 - val_binary_crossentropy: 0.4834
Epoch 19/20
25000/25000 - 5s - loss: 0.1548 - accuracy: 0.9708 - binary_crossentropy: 0.0829 - val_loss: 0.5416 - val_accuracy: 0.8570 - val_binary_crossentropy: 0.4695
Epoch 20/20
25000/25000 - 4s - loss: 0.1484 - accuracy: 0.9751 - binary_crossentropy: 0.0764 - val_loss: 0.5491 - val_accuracy: 0.8563 - val_binary_crossentropy: 0.4773

l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value**2 to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.
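
If you want to inspect the penalty term itself, the per-layer regularization losses are collected in the model's losses attribute; a quick check (an optional aside, assuming eager execution) looks like this:

# Each regularized layer contributes one tensor to `l2_model.losses`;
# their sum is the penalty currently added to the training loss.
regularization_penalty = tf.add_n(l2_model.losses)
print(regularization_penalty.numpy())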

Here's the impact of our L2 regularization penalty:

plot_history([('baseline', baseline_history),
              ('l2', l2_model_history)])

[Plot: training and validation binary crossentropy for the baseline and L2-regularized models.]

As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters.

Add dropout

Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out; instead the layer's output values are scaled down by the keep probability (one minus the dropout rate), to compensate for the fact that more units are active than at training time.
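
Here is a conceptual NumPy sketch of that description (note that tf.keras's Dropout layer actually uses the equivalent "inverted" formulation, scaling the surviving activations up by 1 / (1 - rate) at training time so that nothing needs rescaling at test time):

rate = 0.5
activations = np.array([0.2, 0.5, 1.3, 0.8, 1.1])

# Training time: zero out a random fraction `rate` of the features.
mask = np.random.rand(activations.size) >= rate
train_output = activations * mask

# Test time (classic formulation): keep every unit, scale by the keep probability.
test_output = activations * (1 - rate)
print(train_output, test_output)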

In tf.keras you can introduce dropout in a network via the Dropout layer, which is applied to the output of the layer right before it.

Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:

dpt_model = keras.models.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation='sigmoid')
])

dpt_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy','binary_crossentropy'])

dpt_model_history = dpt_model.fit(train_data, train_labels,
                                  epochs=20,
                                  batch_size=512,
                                  validation_data=(test_data, test_labels),
                                  verbose=2)
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 - 5s - loss: 0.6126 - accuracy: 0.6638 - binary_crossentropy: 0.6126 - val_loss: 0.4757 - val_accuracy: 0.8579 - val_binary_crossentropy: 0.4757
Epoch 2/20
25000/25000 - 4s - loss: 0.4509 - accuracy: 0.8194 - binary_crossentropy: 0.4509 - val_loss: 0.3419 - val_accuracy: 0.8805 - val_binary_crossentropy: 0.3419
Epoch 3/20
25000/25000 - 4s - loss: 0.3485 - accuracy: 0.8710 - binary_crossentropy: 0.3485 - val_loss: 0.2927 - val_accuracy: 0.8869 - val_binary_crossentropy: 0.2927
Epoch 4/20
25000/25000 - 4s - loss: 0.2841 - accuracy: 0.9000 - binary_crossentropy: 0.2841 - val_loss: 0.2773 - val_accuracy: 0.8880 - val_binary_crossentropy: 0.2773
Epoch 5/20
25000/25000 - 4s - loss: 0.2370 - accuracy: 0.9174 - binary_crossentropy: 0.2370 - val_loss: 0.2779 - val_accuracy: 0.8874 - val_binary_crossentropy: 0.2779
Epoch 6/20
25000/25000 - 4s - loss: 0.2082 - accuracy: 0.9300 - binary_crossentropy: 0.2082 - val_loss: 0.2943 - val_accuracy: 0.8850 - val_binary_crossentropy: 0.2943
Epoch 7/20
25000/25000 - 5s - loss: 0.1824 - accuracy: 0.9391 - binary_crossentropy: 0.1824 - val_loss: 0.3047 - val_accuracy: 0.8839 - val_binary_crossentropy: 0.3047
Epoch 8/20
25000/25000 - 4s - loss: 0.1634 - accuracy: 0.9473 - binary_crossentropy: 0.1634 - val_loss: 0.3346 - val_accuracy: 0.8826 - val_binary_crossentropy: 0.3346
Epoch 9/20
25000/25000 - 5s - loss: 0.1454 - accuracy: 0.9520 - binary_crossentropy: 0.1454 - val_loss: 0.3452 - val_accuracy: 0.8813 - val_binary_crossentropy: 0.3452
Epoch 10/20
25000/25000 - 4s - loss: 0.1318 - accuracy: 0.9548 - binary_crossentropy: 0.1318 - val_loss: 0.3594 - val_accuracy: 0.8806 - val_binary_crossentropy: 0.3594
Epoch 11/20
25000/25000 - 4s - loss: 0.1209 - accuracy: 0.9580 - binary_crossentropy: 0.1209 - val_loss: 0.3654 - val_accuracy: 0.8797 - val_binary_crossentropy: 0.3654
Epoch 12/20
25000/25000 - 4s - loss: 0.1122 - accuracy: 0.9611 - binary_crossentropy: 0.1122 - val_loss: 0.3926 - val_accuracy: 0.8782 - val_binary_crossentropy: 0.3926
Epoch 13/20
25000/25000 - 4s - loss: 0.1009 - accuracy: 0.9633 - binary_crossentropy: 0.1009 - val_loss: 0.4244 - val_accuracy: 0.8773 - val_binary_crossentropy: 0.4244
Epoch 14/20
25000/25000 - 4s - loss: 0.0952 - accuracy: 0.9662 - binary_crossentropy: 0.0952 - val_loss: 0.4538 - val_accuracy: 0.8765 - val_binary_crossentropy: 0.4538
Epoch 15/20
25000/25000 - 5s - loss: 0.0897 - accuracy: 0.9681 - binary_crossentropy: 0.0897 - val_loss: 0.4545 - val_accuracy: 0.8772 - val_binary_crossentropy: 0.4545
Epoch 16/20
25000/25000 - 5s - loss: 0.0845 - accuracy: 0.9689 - binary_crossentropy: 0.0845 - val_loss: 0.4856 - val_accuracy: 0.8776 - val_binary_crossentropy: 0.4856
Epoch 17/20
25000/25000 - 4s - loss: 0.0793 - accuracy: 0.9702 - binary_crossentropy: 0.0793 - val_loss: 0.5024 - val_accuracy: 0.8776 - val_binary_crossentropy: 0.5024
Epoch 18/20
25000/25000 - 4s - loss: 0.0725 - accuracy: 0.9717 - binary_crossentropy: 0.0725 - val_loss: 0.5342 - val_accuracy: 0.8772 - val_binary_crossentropy: 0.5342
Epoch 19/20
25000/25000 - 4s - loss: 0.0711 - accuracy: 0.9722 - binary_crossentropy: 0.0711 - val_loss: 0.5416 - val_accuracy: 0.8763 - val_binary_crossentropy: 0.5416
Epoch 20/20
25000/25000 - 5s - loss: 0.0685 - accuracy: 0.9741 - binary_crossentropy: 0.0685 - val_loss: 0.5653 - val_accuracy: 0.8754 - val_binary_crossentropy: 0.5653
plot_history([('baseline', baseline_history),
              ('dropout', dpt_model_history)])

[Plot: training and validation binary crossentropy for the baseline and dropout models.]

Adding dropout is a clear improvement over the baseline model.

To recap: here are the most common ways to prevent overfitting in neural networks:

  • Get more training data.
  • Reduce the capacity of the network.
  • Add weight regularization.
  • Add dropout.

Two important approaches not covered in this guide are data augmentation and batch normalization.

#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.