tf.raw_ops.ResourceApplyCenteredRMSProp

Update '*var' according to the centered RMSProp algorithm.

The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, whereas in a sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
mean_grad = decay * mean_grad + (1 - decay) * gradient
Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)

mg <- rho * mg_{t-1} + (1 - rho) * grad
ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom
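
As a concrete illustration of the update rule above, here is a minimal NumPy sketch of one dense centered-RMSProp step. The function name and signature are invented for illustration; only the arithmetic mirrors the equations in this doc.

```python
import numpy as np

def centered_rmsprop_step(var, mg, ms, mom, grad, lr, rho, momentum, epsilon):
    """One dense centered-RMSProp step, mirroring the equations above."""
    mg = rho * mg + (1 - rho) * grad         # mean gradient (first moment)
    ms = rho * ms + (1 - rho) * grad * grad  # mean square (second moment)
    # ms - mg**2 estimates the gradient's variance: this is the "centering"
    # that distinguishes centered RMSProp from regular RMSProp.
    mom = momentum * mom + lr * grad / np.sqrt(ms - mg * mg + epsilon)
    var = var - mom
    return var, mg, ms, mom
```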

Args

var: A Tensor of type resource. Should be from a Variable().
mg: A Tensor of type resource. Should be from a Variable().
ms: A Tensor of type resource. Should be from a Variable().
mom: A Tensor of type resource. Should be from a Variable().
lr: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, qint16, quint16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar.
rho: A Tensor. Must have the same type as lr. Decay rate. Must be a scalar.
momentum: A Tensor. Must have the same type as lr. Momentum scale. Must be a scalar.
epsilon: A Tensor. Must have the same type as lr. Ridge term. Must be a scalar.
grad: A Tensor. Must have the same type as lr. The gradient.
use_locking: An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name: A name for the operation (optional).

Returns

The created Operation.
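
A minimal eager-mode usage sketch (the values below are arbitrary; passing variable handles via .handle is one way to feed the op's resource inputs):

```python
import tensorflow as tf

var = tf.Variable([1.0, 2.0])
mg = tf.Variable(tf.zeros_like(var))   # mean-gradient accumulator
ms = tf.Variable(tf.zeros_like(var))   # mean-square accumulator
mom = tf.Variable(tf.zeros_like(var))  # momentum accumulator
grad = tf.constant([0.1, -0.2])

# Resource inputs take variable handles; the scalar hyperparameters are
# converted to float32 to match grad's dtype.
tf.raw_ops.ResourceApplyCenteredRMSProp(
    var=var.handle, mg=mg.handle, ms=ms.handle, mom=mom.handle,
    lr=0.01, rho=0.9, momentum=0.0, epsilon=1e-7,
    grad=grad, use_locking=False)

print(var.numpy())  # var has been updated in place
```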