tf.keras.callbacks.ReduceLROnPlateau
Reduce learning rate when a metric has stopped improving.
Inherits From: Callback
tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss',
    factor=0.1,
    patience=10,
    verbose=0,
    mode='auto',
    min_delta=0.0001,
    cooldown=0,
    min_lr=0,
    **kwargs
)
Models often benefit from reducing the learning rate by a factor
of 2-10 once learning stagnates. This callback monitors a
quantity and, if no improvement is seen for a 'patience' number
of epochs, the learning rate is reduced.
Example:
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                                 patience=5, min_lr=0.001)
model.fit(X_train, Y_train, callbacks=[reduce_lr])
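Because the default monitor is 'val_loss', the monitored quantity only exists when validation data is supplied to fit (for example via validation_split or validation_data). A fuller, self-contained sketch follows; the toy data, model architecture, and hyperparameter values are illustrative assumptions, not part of the API:

import numpy as np
import tensorflow as tf

# Illustrative toy regression data; shapes and names are assumptions for this sketch.
x_train = np.random.rand(256, 8).astype('float32')
y_train = np.random.rand(256, 1).astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss='mse')

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                                 patience=5, min_lr=0.001)
# validation_split provides the 'val_loss' that the callback monitors.
model.fit(x_train, y_train, validation_split=0.2, epochs=30,
          callbacks=[reduce_lr], verbose=0)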
Args

monitor: quantity to be monitored.
factor: factor by which the learning rate will be reduced. new_lr = lr * factor.
patience: number of epochs with no improvement after which the learning rate will be reduced.
verbose: int. 0: quiet, 1: update messages.
mode: one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in 'max' mode it will be reduced when the quantity monitored has stopped increasing; in 'auto' mode, the direction is automatically inferred from the name of the monitored quantity.
min_delta: threshold for measuring the new optimum, to only focus on significant changes.
cooldown: number of epochs to wait before resuming normal operation after the learning rate has been reduced.
min_lr: lower bound on the learning rate.
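Taken together, these arguments describe one reduction step: once 'patience' epochs pass without an improvement larger than min_delta, the rate becomes lr * factor, bounded below by min_lr, and then 'cooldown' epochs are waited before counting again. A rough sketch of that decision, paraphrasing the documented behavior rather than the library's internal code:

def plateau_step(lr, epochs_without_improvement, in_cooldown,
                 factor=0.1, patience=10, min_lr=0.0):
    # Paraphrase of the documented rule; not the library's internal implementation.
    if in_cooldown:
        return lr  # wait out the cooldown period before counting epochs again
    if epochs_without_improvement >= patience:
        return max(lr * factor, min_lr)  # new_lr = lr * factor, clipped at min_lr
    return lr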
Methods
in_cooldown
in_cooldown()
set_model
set_model(
    model
)
set_params
set_params(
    params
)
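These methods are normally invoked by Model.fit during training rather than called directly. After training, the current (possibly reduced) learning rate can be read off the optimizer; a brief sketch, assuming the model and reduce_lr from the example above:

# Assumes `model` was trained with the reduce_lr callback as in the example above.
final_lr = float(tf.keras.backend.get_value(model.optimizer.learning_rate))
print('learning rate after training:', final_lr)
print('currently in cooldown:', reduce_lr.in_cooldown())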