Sparse update entries in `var` and `accum` according to the FOBOS algorithm.
```python
tf.raw_ops.SparseApplyProximalAdagrad(
    var, accum, lr, l1, l2, grad, indices, use_locking=False, name=None
)
```
That is, for the rows we have `grad` for, we update `var` and `accum` as follows:
\[accum \mathrel{+}= grad \cdot grad\]
\[prox\_v = var - lr \cdot \frac{grad}{\sqrt{accum}}\]
\[var = \frac{\operatorname{sign}(prox\_v)}{1 + lr \cdot l2} \cdot \max\{|prox\_v| - lr \cdot l1,\ 0\}\]
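As a check on the equations, here is a minimal NumPy sketch of the per-row update (my own illustration of the math above, not the op's actual kernel):

```python
import numpy as np

def sparse_proximal_adagrad_step(var, accum, lr, l1, l2, grad, indices):
    """Apply the FOBOS/proximal-Adagrad update to the rows in `indices`.

    Mirrors the equations above; mutates `var` and `accum` in place.
    """
    for g, i in zip(grad, indices):
        accum[i] += g * g                             # accum += grad * grad
        prox_v = var[i] - lr * g / np.sqrt(accum[i])  # Adagrad-scaled step
        # Soft-threshold by lr*l1, then shrink by 1 / (1 + lr*l2).
        var[i] = (np.sign(prox_v) / (1 + lr * l2)
                  * np.maximum(np.abs(prox_v) - lr * l1, 0.0))
    return var
```

Rows not named in `indices` are left untouched, which is what makes the update sparse.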
| Returns |
| --- |
| A mutable `Tensor`. Has the same type as `var`. |