tf.keras.callbacks.EarlyStopping
Stop training when a monitored metric has stopped improving.
Inherits From: Callback
tf.keras.callbacks.EarlyStopping(
monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto',
baseline=None, restore_best_weights=False
)
Assume the goal of training is to minimize the loss. The metric to be monitored would then be 'loss', and mode would be 'min'. A model.fit() training loop will check at the end of every epoch whether the loss is no longer decreasing, taking min_delta and patience into account if applicable. Once the loss is found to be no longer decreasing, model.stop_training is set to True and training terminates.
The quantity to be monitored needs to be available in the logs dict. To make it so, pass the loss or metrics at model.compile().
Example:
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
# This callback will stop the training when there is no improvement in
# the loss for three consecutive epochs.
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')
history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
epochs=10, batch_size=1, callbacks=[callback],
verbose=0)
len(history.history['loss']) # Only 4 epochs are run.
4
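To monitor a validation quantity such as 'val_loss', validation data must be passed to model.fit() so that the quantity shows up in the logs dict. A minimal sketch, assuming synthetic placeholder data and a placeholder model:
import numpy as np
import tensorflow as tf

# Placeholder data: 100 samples with 20 features each.
x = np.random.rand(100, 20)
y = np.random.rand(100, 1)

callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',         # requires validation data below
    patience=5,                 # allow 5 epochs without improvement
    restore_best_weights=True)  # roll back to the best epoch when stopping

model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse')

history = model.fit(x, y,
                    validation_split=0.2,  # makes 'val_loss' available in logs
                    epochs=100, callbacks=[callback], verbose=0)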
Arguments
monitor: Quantity to be monitored.
min_delta: Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta counts as no improvement.
patience: Number of epochs with no improvement after which training will be stopped.
verbose: Verbosity mode.
mode: One of {"auto", "min", "max"}. In min mode, training will stop when the quantity monitored has stopped decreasing; in max mode it will stop when the quantity monitored has stopped increasing; in auto mode, the direction is automatically inferred from the name of the monitored quantity.
baseline: Baseline value for the monitored quantity. Training will stop if the model doesn't show improvement over the baseline.
restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used.
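As an illustration of mode and baseline, the sketch below monitors a quantity that should increase ('val_accuracy') and uses max mode; the data, model, and baseline value of 0.6 are arbitrary placeholders:
import numpy as np
import tensorflow as tf

# Placeholder binary-classification data.
x = np.random.rand(200, 10)
y = (np.random.rand(200, 1) > 0.5).astype('float32')

callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy',  # higher is better, so use max mode
    mode='max',
    baseline=0.6,            # stop if val_accuracy fails to improve past 0.6
    patience=3,
    verbose=1)

model = tf.keras.models.Sequential(
    [tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x, y, validation_split=0.25, epochs=50,
          callbacks=[callback], verbose=0)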
Methods
get_monitor_value
View source
get_monitor_value(
logs
)
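get_monitor_value(logs) looks up the current value of the monitored quantity in the logs dict. A minimal sketch of how a subclass might use it (LoggingEarlyStopping is a hypothetical name, not part of the API):
import tensorflow as tf

class LoggingEarlyStopping(tf.keras.callbacks.EarlyStopping):
  """Hypothetical subclass: print the monitored value each epoch,
  then defer to the standard early-stopping logic."""

  def on_epoch_end(self, epoch, logs=None):
    current = self.get_monitor_value(logs)  # value of `monitor` pulled from logs
    if current is not None:
      print(f'epoch {epoch}: {self.monitor} = {current:.4f}')
    super().on_epoch_end(epoch, logs)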
set_model
View source
set_model(
model
)
set_params
View source
set_params(
params
)