Power learning rate decay with offset.
tfm.optimization.PowerDecayWithOffset(
    initial_learning_rate: float,
    power: float = 1.0,
    offset: int = 0,
    pre_offset_learning_rate: float = 1000000.0,
    name: str = 'PowerDecayWithOffset'
)
The learning rate equals `pre_offset_learning_rate` if `step < offset`.
Otherwise, the learning rate equals `initial_learning_rate * (step - offset)^power`.
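A minimal usage sketch (assuming `tensorflow-models` is installed and imported as `tfm`; the numeric values are illustrative):

```python
import tensorflow as tf
import tensorflow_models as tfm

# Hold a small constant rate for the first 10k steps, then apply
# inverse-square-root decay: lr = 1.0 * (step - 10000) ** -0.5.
schedule = tfm.optimization.PowerDecayWithOffset(
    initial_learning_rate=1.0,
    power=-0.5,
    offset=10000,
    pre_offset_learning_rate=1e-4,
)

print(schedule(5000).numpy())   # 1e-4: step < offset, pre-offset rate applies
print(schedule(12500).numpy())  # 1.0 * 2500 ** -0.5 = 0.02
```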
Methods
from_config
@classmethod
from_config(
    config
)
Instantiates a LearningRateSchedule from its config.
| Args | |
|---|---|
| `config` | Output of `get_config()`. |

| Returns | |
|---|---|
| A `LearningRateSchedule` instance. | |
get_config
get_config()
Get the configuration of the learning rate schedule.
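The two methods round-trip, as in this sketch (reusing the `schedule` instance from the example above):

```python
# Serialize the schedule to a plain dict and rebuild an equivalent one.
config = schedule.get_config()
restored = tfm.optimization.PowerDecayWithOffset.from_config(config)

# The restored schedule produces the same learning rate.
assert schedule(12500).numpy() == restored(12500).numpy()
```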
__call__
__call__(
    step
)
Call self as a function.
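Like any `tf.keras.optimizers.schedules.LearningRateSchedule`, the object can be passed directly to a Keras optimizer, which calls it with the current training step (a sketch, reusing `schedule` from above):

```python
# The optimizer invokes schedule(step) at each update to get the learning rate.
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```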