tf.keras.callbacks.EarlyStopping
Stop training when a monitored metric has stopped improving.
Inherits From: Callback
tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=False,
    start_from_epoch=0
)
Assume the goal of training is to minimize the loss. The metric to be
monitored would then be 'loss', and the mode would be 'min'. A
model.fit() training loop checks at the end of every epoch whether the
loss is still decreasing, taking min_delta and patience into account if
applicable. Once the loss is found to be no longer decreasing,
model.stop_training is set to True and training terminates.

The quantity to be monitored needs to be available in the logs dict.
To make it so, pass the loss or metrics at model.compile().
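The interaction of min_delta and patience can be sketched in plain Python. The following is an illustrative decision rule for 'min' mode only, not the actual Keras implementation:

```python
def stopped_epoch(losses, min_delta=0.0, patience=0):
    """Return the epoch index at which training would stop,
    or len(losses) if training runs to completion.

    An epoch counts as an improvement only if its loss drops by
    more than min_delta below the best loss seen so far.
    """
    best = float("inf")
    wait = 0  # consecutive epochs without improvement
    for epoch, loss in enumerate(losses):
        if loss < best - min_delta:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(losses)

# The drop from 0.9 to 0.89 is smaller than min_delta, so it does
# not count as an improvement; after 2 such epochs training stops.
print(stopped_epoch([1.0, 0.9, 0.89, 0.88], min_delta=0.05, patience=2))
```

With a steadily decreasing loss, the function returns len(losses), i.e. training is never cut short.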
Args

monitor: Quantity to be monitored.

min_delta: Minimum change in the monitored quantity to qualify as an
improvement; an absolute change of less than min_delta counts as no
improvement.

patience: Number of epochs with no improvement after which training
will be stopped.

verbose: Verbosity mode, 0 or 1. Mode 0 is silent; mode 1 displays
messages when the callback takes an action.

mode: One of {"auto", "min", "max"}. In "min" mode, training stops when
the monitored quantity has stopped decreasing; in "max" mode it stops
when the monitored quantity has stopped increasing; in "auto" mode, the
direction is inferred automatically from the name of the monitored
quantity.

baseline: Baseline value for the monitored quantity. Training will stop
if the model doesn't show improvement over the baseline.

restore_best_weights: Whether to restore model weights from the epoch
with the best value of the monitored quantity. If False, the model
weights obtained at the last step of training are used. An epoch will
be restored regardless of its performance relative to the baseline. If
no epoch improves on baseline, training will run for patience epochs
and restore weights from the best epoch in that set.

start_from_epoch: Number of epochs to wait before starting to monitor
improvement. This allows for a warm-up period in which no improvement
is expected and training will therefore not be stopped.
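Taken together, these arguments might be combined as in the sketch below. The specific values are illustrative placeholders, not recommendations:

```python
import tensorflow as tf

# Monitor validation loss; skip the first 5 warm-up epochs, require at
# least a 1e-3 drop to count as an improvement, tolerate 10 epochs
# without improvement, and roll back to the best weights on stop.
callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=1e-3,
    patience=10,
    restore_best_weights=True,
    start_from_epoch=5,
)
```

Because 'val_loss' is monitored here, model.fit() would need validation data (via validation_data or validation_split) for the callback to find the quantity in the logs dict.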
Example:
import numpy as np
import tensorflow as tf

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
# This callback will stop the training when there is no improvement in
# the loss for three consecutive epochs.
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')
history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
                    epochs=10, batch_size=1, callbacks=[callback],
                    verbose=0)
len(history.history['loss'])  # Only 4 epochs are run.
4
Methods

get_monitor_value(logs)

set_model(model)

set_params(params)
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2023-10-06 UTC.