tf.raw_ops.SparseApplyProximalGradientDescent

Sparsely updates '*var' using the FOBOS algorithm with a fixed learning rate.

Compat aliases for migration

See Migration guide for more details.

tf.compat.v1.raw_ops.SparseApplyProximalGradientDescent

That is, for each row for which we have grad, we update var as follows:

prox_v = var - alpha * grad
var = sign(prox_v) / (1 + alpha * l2) * max{|prox_v| - alpha * l1, 0}
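To make the two steps concrete, here is a minimal NumPy sketch of the per-row update. The names var, alpha, l1, l2, grad, and indices mirror the op's arguments; this is an illustrative reference under those assumptions, not the actual kernel.

import numpy as np

def sparse_proximal_gd(var, alpha, l1, l2, grad, indices):
    # Apply the FOBOS update only to the rows named by `indices`.
    for g, i in zip(grad, indices):
        prox_v = var[i] - alpha * g  # plain gradient step
        # Soft-threshold toward zero (l1) and shrink (l2).
        var[i] = (np.sign(prox_v) / (1.0 + alpha * l2)
                  * np.maximum(np.abs(prox_v) - alpha * l1, 0.0))
    return var

var = np.array([[1.0, 2.0], [3.0, 4.0]])
grad = np.array([[0.1, 0.1]])
sparse_proximal_gd(var, alpha=0.5, l1=0.01, l2=0.001, grad=grad, indices=[1])
# Row 0 is untouched; row 1 moves against the gradient and is shrunk toward zero.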

Args

var: A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, qint16, quint16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
alpha: A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
l1: A Tensor. Must have the same type as var. L1 regularization. Must be a scalar.
l2: A Tensor. Must have the same type as var. L2 regularization. Must be a scalar.
grad: A Tensor. Must have the same type as var. The gradient.
indices: A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var.
use_locking: An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but it may exhibit less contention.
name: A name for the operation (optional).

Returns

A mutable Tensor. Has the same type as var.
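A minimal usage sketch follows, assuming TF 1.x-style graph execution, since the ref-typed var input requires a legacy (non-resource) variable; the numeric values are illustrative only.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.disable_resource_variables()  # the op's `var` input is ref-typed

var = tf.compat.v1.Variable([[1.0, 2.0], [3.0, 4.0]], use_resource=False)

update = tf.raw_ops.SparseApplyProximalGradientDescent(
    var=var,
    alpha=tf.constant(0.5),            # fixed learning rate
    l1=tf.constant(0.01),              # L1 regularization strength
    l2=tf.constant(0.001),             # L2 regularization strength
    grad=tf.constant([[0.1, 0.1]]),    # gradients for the selected rows only
    indices=tf.constant([1]),          # update row 1 of `var`
)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(update))  # row 1 has been updated and shrunk toward zero

In eager TF2 code, the resource-variable counterpart tf.raw_ops.ResourceSparseApplyProximalGradientDescent plays the same role, taking a variable's resource handle instead of a ref.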