Update relevant entries in '*var' according to the Ftrl-proximal scheme.
tf.raw_ops.SparseApplyFtrl(
    var,
    accum,
    linear,
    grad,
    indices,
    lr,
    l1,
    l2,
    lr_power,
    use_locking=False,
    multiply_linear_by_lr=False,
    name=None
)
That is, for rows for which we have grad, we update var, accum, and linear as follows:
\[accum_{new} = accum + grad * grad\]
\[linear += grad + (accum_{new}^{-lr_{power}} - accum^{-lr_{power}}) / lr * var\]
\[quadratic = 1.0 / (accum_{new}^{lr_{power}} * lr) + 2 * l2\]
\[var = (sign(linear) * l1 - linear) / quadratic\ if\ |linear| > l1\ else\ 0.0\]
\[accum = accum_{new}\]
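
For readers who want to trace the arithmetic, here is a plain NumPy sketch of the per-row update above (an illustrative reference implementation, not the actual kernel; the function name and loop structure are this sketch's own):

```python
import numpy as np

def sparse_apply_ftrl_reference(var, accum, linear, grad, indices,
                                lr, l1, l2, lr_power):
    # For each row that has a gradient, apply the FTRL-proximal update
    # exactly as written in the equations above.
    for g, i in zip(grad, indices):
        accum_new = accum[i] + g * g
        linear[i] += g + (accum_new ** -lr_power - accum[i] ** -lr_power) / lr * var[i]
        quadratic = 1.0 / (accum_new ** lr_power * lr) + 2.0 * l2
        var[i] = np.where(np.abs(linear[i]) > l1,
                          (np.sign(linear[i]) * l1 - linear[i]) / quadratic,
                          0.0)
        accum[i] = accum_new
    return var
```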
| Args |
|---|
| var | A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, qint16, quint16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). |
| accum | A mutable Tensor. Must have the same type as var. Should be from a Variable(). |
| linear | A mutable Tensor. Must have the same type as var. Should be from a Variable(). |
| grad | A Tensor. Must have the same type as var. The gradient. |
| indices | A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. |
| lr | A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. |
| l1 | A Tensor. Must have the same type as var. L1 regularization. Must be a scalar. |
| l2 | A Tensor. Must have the same type as var. L2 regularization. Must be a scalar. |
| lr_power | A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. |
| use_locking | An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| multiply_linear_by_lr | An optional bool. Defaults to False. |
| name | A name for the operation (optional). |
| Returns |
|---|
| A mutable Tensor. Has the same type as var. |
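
A minimal usage sketch, assuming TF1-style graph mode with reference variables (this op takes mutable ref tensors; with TF2 resource variables the counterpart op tf.raw_ops.ResourceSparseApplyFtrl is used instead). The shapes and hyperparameter values below are made up for illustration:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Reference (non-resource) variables, since this op expects mutable ref tensors.
var = tf.compat.v1.Variable([[1.0, 2.0], [3.0, 4.0]], use_resource=False)
accum = tf.compat.v1.Variable(tf.zeros([2, 2]), use_resource=False)
linear = tf.compat.v1.Variable(tf.zeros([2, 2]), use_resource=False)

grad = tf.constant([[0.1, 0.1]])  # gradient for a single row
indices = tf.constant([1])        # index of that row in var/accum/linear

update = tf.raw_ops.SparseApplyFtrl(
    var=var, accum=accum, linear=linear, grad=grad, indices=indices,
    lr=0.01, l1=0.001, l2=0.001, lr_power=-0.5)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(update))  # var with row 1 updated in place
```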