Update relevant entries in '*var' according to the Ftrl-proximal scheme.
tf.raw_ops.ResourceSparseApplyFtrl(
    var, accum, linear, grad, indices, lr, l1, l2, lr_power, use_locking=False,
    multiply_linear_by_lr=False, name=None
)
That is, for the rows for which we have grad, we update var, accum, and linear as follows:

accum_new = accum + grad * grad
linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
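As an illustration of the update rule above, here is a minimal pure-NumPy sketch of the per-row computation (not the TensorFlow kernel itself). The function name `sparse_ftrl_update` and the explicit loop over `indices` are for exposition only, and the `multiply_linear_by_lr` and locking behaviors are ignored:

```python
import numpy as np

def sparse_ftrl_update(var, accum, linear, grad, indices, lr, l1, l2, lr_power):
    # Illustrative sketch: apply the documented update to the rows of
    # var/accum/linear selected by `indices`, one gradient row per index.
    for g, i in zip(grad, indices):
        accum_new = accum[i] + g * g
        linear[i] += g - (accum_new ** -lr_power - accum[i] ** -lr_power) / lr * var[i]
        quadratic = 1.0 / (accum_new ** lr_power * lr) + 2 * l2
        var[i] = np.where(np.abs(linear[i]) > l1,
                          (np.sign(linear[i]) * l1 - linear[i]) / quadratic,
                          0.0)
        accum[i] = accum_new
    return var, accum, linear
```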
| Args | |
|---|---|
| var | A Tensor of type resource. Should be from a Variable(). |
| accum | A Tensor of type resource. Should be from a Variable(). |
| linear | A Tensor of type resource. Should be from a Variable(). |
| grad | A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. The gradient. |
| indices | A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. |
| lr | A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar. |
| l1 | A Tensor. Must have the same type as grad. L1 regularization. Must be a scalar. |
| l2 | A Tensor. Must have the same type as grad. L2 regularization. Must be a scalar. |
| lr_power | A Tensor. Must have the same type as grad. Scaling factor. Must be a scalar. |
| use_locking | An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| multiply_linear_by_lr | An optional bool. Defaults to False. |
| name | A name for the operation (optional). |
| Returns | |
|---|---|
| The created Operation. |
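
For context, a minimal eager-mode usage sketch is shown below; the variable shapes, index, and hyperparameter values are arbitrary illustrative choices, not part of the op's contract:

```python
import tensorflow as tf

# Illustrative shapes and values only.
var = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
accum = tf.Variable([[0.1, 0.1], [0.1, 0.1]])
linear = tf.Variable([[0.0, 0.0], [0.0, 0.0]])

grad = tf.constant([[0.5, 0.5]])   # one gradient row ...
indices = tf.constant([1])         # ... applied to row 1 of var/accum/linear

tf.raw_ops.ResourceSparseApplyFtrl(
    var=var.handle, accum=accum.handle, linear=linear.handle,
    grad=grad, indices=indices,
    lr=tf.constant(0.01), l1=tf.constant(0.0), l2=tf.constant(0.0),
    lr_power=tf.constant(-0.5))

print(var.numpy())  # only row 1 has been updated
```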