Update '*var' according to the RMSProp algorithm.
```
tf.raw_ops.ResourceSparseApplyRMSProp(
    var,
    ms,
    mom,
    lr,
    rho,
    momentum,
    epsilon,
    grad,
    indices,
    use_locking=False,
    name=None
)
```
Note that in the dense implementation of this algorithm, `ms` and `mom` will
update even if `grad` is zero, but in this sparse implementation, `ms`
and `mom` will not update in iterations during which `grad` is zero.
```
mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
```
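As a rough illustration of these update rules and of the sparse-skip behavior, here is a minimal NumPy sketch. It is not the op's implementation; the function name `sparse_rmsprop_update` and its signature are hypothetical.

```python
import numpy as np

def sparse_rmsprop_update(var, ms, mom, lr, rho, momentum, epsilon,
                          grad, indices):
    """Hypothetical sketch of the sparse RMSProp update.

    Only the rows listed in `indices` are touched; rows with no
    gradient keep their `ms` and `mom` values unchanged, matching
    the sparse behavior described above.
    """
    for g, i in zip(grad, indices):
        ms[i] = rho * ms[i] + (1 - rho) * g * g
        mom[i] = momentum * mom[i] + lr * g / np.sqrt(ms[i] + epsilon)
        var[i] -= mom[i]

var = np.array([[1.0, 2.0], [3.0, 4.0]])
ms = np.zeros_like(var)
mom = np.zeros_like(var)

# Only row 0 receives a gradient; row 1 is left untouched.
sparse_rmsprop_update(var, ms, mom, lr=0.01, rho=0.9, momentum=0.9,
                      epsilon=1e-7, grad=np.array([[0.1, 0.1]]),
                      indices=np.array([0]))
```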
| Args | |
|---|---|
| `var` | A `Tensor` of type `resource`. Should be from a `Variable()`. |
| `ms` | A `Tensor` of type `resource`. Should be from a `Variable()`. |
| `mom` | A `Tensor` of type `resource`. Should be from a `Variable()`. |
| `lr` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Scaling factor. Must be a scalar. |
| `rho` | A `Tensor`. Must have the same type as `lr`. Decay rate. Must be a scalar. |
| `momentum` | A `Tensor`. Must have the same type as `lr`. |
| `epsilon` | A `Tensor`. Must have the same type as `lr`. Ridge term. Must be a scalar. |
| `grad` | A `Tensor`. Must have the same type as `lr`. The gradient. |
| `indices` | A `Tensor`. Must be one of the following types: `int32`, `int64`. A vector of indices into the first dimension of `var`, `ms`, and `mom`. |
| `use_locking` | An optional `bool`. Defaults to `False`. If `True`, updating of the `var`, `ms`, and `mom` tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns |
|---|
| The created `Operation`. |
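A minimal usage sketch in eager mode follows; the parameter values are illustrative only. `tf.raw_ops` ops take keyword arguments, and resource variables are passed via their `.handle`:

```python
import tensorflow as tf

var = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
ms = tf.Variable(tf.zeros_like(var))
mom = tf.Variable(tf.zeros_like(var))

# Gradient for row 0 only; row 1's ms and mom stay untouched.
grad = tf.constant([[0.1, 0.1]])
indices = tf.constant([0])

tf.raw_ops.ResourceSparseApplyRMSProp(
    var=var.handle, ms=ms.handle, mom=mom.handle,
    lr=tf.constant(0.01), rho=tf.constant(0.9),
    momentum=tf.constant(0.9), epsilon=tf.constant(1e-7),
    grad=grad, indices=indices)

print(var.numpy())  # row 0 has moved; row 1 is unchanged
```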