Update 'var' and 'accum' according to FOBOS with Adagrad learning rate.
tf.raw_ops.ResourceApplyProximalAdagrad(
    var, accum, lr, l1, l2, grad, use_locking=False, name=None
)
accum += grad * grad
prox_v = var - lr * grad * (1 / sqrt(accum))
var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1,0}
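A minimal NumPy sketch of the same update, for reference only; the function name, array shapes, and hyperparameter handling are illustrative assumptions, while the real op performs the same element-wise arithmetic in place on the variable and accumulator.

import numpy as np

def proximal_adagrad_step(var, accum, grad, lr, l1, l2):
    # Illustrative re-statement of the update rule above, not the op itself.
    accum = accum + grad * grad                      # accum += grad * grad
    prox_v = var - lr * grad / np.sqrt(accum)        # Adagrad-scaled gradient step
    # FOBOS proximal step: soft-threshold by lr*l1, then shrink by 1/(1 + lr*l2)
    var = np.sign(prox_v) / (1.0 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)
    return var, accum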
Args

var: A Tensor of type resource. Should be from a Variable().
accum: A Tensor of type resource. Should be from a Variable().
lr: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
l1: A Tensor. Must have the same type as lr. L1 regularization. Must be a scalar.
l2: A Tensor. Must have the same type as lr. L2 regularization. Must be a scalar.
grad: A Tensor. Must have the same type as lr. The gradient.
use_locking: An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name: A name for the operation (optional).

Returns

The created Operation.
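A short eager-mode usage sketch; the variable values, learning rate, and regularization strengths below are illustrative assumptions, and the raw op is called with the resource handles of the variables.

import tensorflow as tf

var = tf.Variable([1.0, 2.0], dtype=tf.float32)
accum = tf.Variable([0.1, 0.1], dtype=tf.float32)
grad = tf.constant([0.5, -0.5], dtype=tf.float32)

# Applies one proximal-Adagrad step; var and accum are updated in place.
tf.raw_ops.ResourceApplyProximalAdagrad(
    var=var.handle,
    accum=accum.handle,
    lr=tf.constant(0.01),
    l1=tf.constant(0.001),
    l2=tf.constant(0.001),
    grad=grad,
)
print(var.numpy())  # updated variable values

In practice this op is usually invoked indirectly through tf.compat.v1.train.ProximalAdagradOptimizer rather than called by hand.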