Returns a tff.learning.optimizers.Optimizer for Yogi.
tff.learning.optimizers.build_yogi(
    learning_rate: float,
    beta_1: float = 0.9,
    beta_2: float = 0.999,
    epsilon: float = 0.001,
    initial_preconditioner_value=1e-06
) -> tff.learning.optimizers.Optimizer
The Yogi optimizer is based on *Adaptive Methods for Nonconvex Optimization* (Zaheer et al., NeurIPS 2018).
The update rule, given learning rate `lr`, epsilon `eps`, accumulator `acc`, preconditioner `s`, iteration `t`, weights `w`, and gradients `g`, is:
acc = beta_1 * acc + (1 - beta_1) * g
s = s + (1 - beta_2) * sign(g ** 2 - s) * (g ** 2)
normalized_lr = lr * sqrt(1 - beta_2**t) / (1 - beta_1**t)
w = w - normalized_lr * acc / (sqrt(s) + eps)
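As a plain-Python illustration, one step of the rule above can be written as follows. This is a sketch of the math only, not the TFF implementation; the function name, signature, and default values are assumed for the example:

```python
import numpy as np

def yogi_step(w, g, acc, s, t, lr=0.01, beta_1=0.9, beta_2=0.999, eps=1e-3):
    """One Yogi update mirroring the rule above (illustrative only)."""
    acc = beta_1 * acc + (1 - beta_1) * g                 # first-moment accumulator
    s = s + (1 - beta_2) * np.sign(g**2 - s) * (g**2)     # additive preconditioner update
    normalized_lr = lr * np.sqrt(1 - beta_2**t) / (1 - beta_1**t)  # bias correction
    w = w - normalized_lr * acc / (np.sqrt(s) + eps)      # parameter update
    return w, acc, s
```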
The implementation of Yogi is based on additive updates, as opposed to the multiplicative updates used in Adam. Experiments show better performance across NLP and vision tasks, in both centralized and federated settings.
Typically, use a learning rate about 10x the one you would use for Adam.
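The returned optimizer can also be driven directly. Below is a minimal standalone sketch, assuming the functional initialize/next interface of tff.learning.optimizers.Optimizer; the weight and gradient values are illustrative:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Build the optimizer; the hyperparameter value here is illustrative.
optimizer = tff.learning.optimizers.build_yogi(learning_rate=0.01)

# A single weight tensor and a matching gradient, purely for demonstration.
weights = (tf.constant([1.0, 2.0]),)
gradients = (tf.constant([0.5, -0.5]),)

# The API is functional: state is created from TensorSpecs matching the
# weights and threaded through each call to `next`.
specs = tf.nest.map_structure(lambda w: tf.TensorSpec(w.shape, w.dtype), weights)
state = optimizer.initialize(specs)
state, weights = optimizer.next(state, weights, gradients)
```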
Args | |
---|---|
`learning_rate` | A positive float for the learning rate.
`beta_1` | A float between 0.0 and 1.0 for the decay used to track previous gradients.
`beta_2` | A float between 0.0 and 1.0 for the decay used to track the magnitude (second moment) of previous gradients.
`epsilon` | A constant trading off adaptivity and noise.
`initial_preconditioner_value` | The starting value for the preconditioner. Only positive values are allowed.