Returns a tff.learning.optimizers.Optimizer for Adam.
tff.learning.optimizers.build_adam(
    learning_rate: optimizer.Float,
    beta_1: optimizer.Float = 0.9,
    beta_2: optimizer.Float = 0.999,
    epsilon: optimizer.Float = 1e-07
) -> tff.learning.optimizers.Optimizer
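The snippet below is a minimal sketch of driving the returned optimizer directly through its functional initialize/next interface; the weights and gradients are hypothetical values chosen only for illustration, and in a federated training setup the optimizer would typically be passed to an algorithm builder instead.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Build an Adam optimizer with the default beta and epsilon values.
optimizer = tff.learning.optimizers.build_adam(learning_rate=0.01)

# Hypothetical model weights and gradients, for illustration only.
weights = (tf.constant([1.0, 2.0]), tf.constant(3.0))
gradients = (tf.constant([0.1, -0.2]), tf.constant(0.5))

# The optimizer is purely functional: `initialize` creates the optimizer
# state from a structure of TensorSpecs, and `next` returns an updated
# (state, weights) pair rather than mutating anything in place.
specs = tf.nest.map_structure(tf.TensorSpec.from_tensor, weights)
state = optimizer.initialize(specs)
state, weights = optimizer.next(state, weights, gradients)
```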
The Adam optimizer is based on the paper Adam: A Method for Stochastic Optimization (Kingma and Ba, 2015).
The update rule, given learning rate lr, epsilon eps, accumulator acc, preconditioner s, iteration t, weights w, and gradients g, is:
acc = beta_1 * acc + (1 - beta_1) * g
s = beta_2 * s + (1 - beta_2) * g**2
normalized_lr = lr * sqrt(1 - beta_2**t) / (1 - beta_1**t)
w = w - normalized_lr * acc / (sqrt(s) + eps)
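As a concrete illustration of these formulas, the following standalone NumPy sketch applies the update rule above for a few iterations on a toy quadratic objective; it mirrors the stated equations but is not the library implementation.

```python
import numpy as np

def adam_step(w, g, acc, s, t, lr=0.01, beta_1=0.9, beta_2=0.999, eps=1e-7):
    """One Adam update following the rule above (illustrative only)."""
    acc = beta_1 * acc + (1 - beta_1) * g
    s = beta_2 * s + (1 - beta_2) * g**2
    normalized_lr = lr * np.sqrt(1 - beta_2**t) / (1 - beta_1**t)
    w = w - normalized_lr * acc / (np.sqrt(s) + eps)
    return w, acc, s

# Minimize f(w) = sum(w**2), whose gradient is 2 * w.
w = np.array([1.0, -2.0])
acc = np.zeros_like(w)
s = np.zeros_like(w)
for t in range(1, 101):  # Adam's iteration count t starts at 1.
    g = 2.0 * w
    w, acc, s = adam_step(w, g, acc, s, t)
print(w)  # Moves toward the minimizer [0.0, 0.0].
```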