Optimizer that implements the RMSProp algorithm (Tieleman & Hinton, 2012).

Inherits From: Optimizer


Reference: Coursera slide 29: Hinton, 2012 (pdf).

Args:
  learning_rate: A Tensor or a floating point value. The learning rate.
  decay: Discounting factor for the history/coming gradient.
  momentum: A scalar tensor.
  epsilon: Small value to avoid a zero denominator.
  use_locking: If True, use locks for the update operation.
  centered: If True, gradients are normalized by the estimated variance of the gradient; if False, by the uncentered second moment. Setting this to True may help with training, but is slightly more expensive in terms of computation and memory. Defaults to False.
  name: Optional name prefix for the operations created when applying gradients. Defaults to "RMSProp".
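The update these arguments parameterize can be sketched in plain Python. This is a simplified scalar version for illustration, not the actual TensorFlow kernel; in particular, the placement of `epsilon` inside the square root follows my reading of the v1 implementation and should be treated as an assumption:

```python
import math

def rmsprop_step(var, grad, state, learning_rate=0.001, decay=0.9,
                 momentum=0.0, epsilon=1e-10, centered=False):
    """One RMSProp update for a single scalar parameter.

    `state` is a dict holding the running averages ('ms' for the mean
    square, 'mg' for the mean gradient) and the momentum accumulator
    ('mom'); all three start at 0.0.
    """
    # Discounted running average of squared gradients, controlled by `decay`.
    state['ms'] = decay * state['ms'] + (1.0 - decay) * grad * grad
    if centered:
        # Also track the mean gradient and normalize by the estimated
        # variance (ms - mg^2) instead of the raw second moment.
        state['mg'] = decay * state['mg'] + (1.0 - decay) * grad
        denom = state['ms'] - state['mg'] ** 2 + epsilon
    else:
        denom = state['ms'] + epsilon
    # Momentum accumulator; with momentum=0.0 this is a plain RMSProp step.
    state['mom'] = momentum * state['mom'] + learning_rate * grad / math.sqrt(denom)
    return var - state['mom']

state = {'ms': 0.0, 'mg': 0.0, 'mom': 0.0}
var = rmsprop_step(1.0, 1.0, state, learning_rate=0.1)
```

With `decay=0.9`, a first step on gradient 1.0 gives `ms = 0.1`, so the effective step is `0.1 / sqrt(0.1 + epsilon)`, i.e. larger than the nominal learning rate until the running average warms up.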



apply_gradients

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.

Args:
  grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
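The two-phase pattern that minimize() wraps can be illustrated without TensorFlow. The function names below mirror the v1 Optimizer API, but the bodies are simplified stand-ins (plain gradient descent on Python floats), shown only to make the (gradient, variable) pairing concrete:

```python
# Framework-free sketch of the minimize() split: first compute gradients,
# then apply them. Not the real TF implementation.

def compute_gradients(grad_fn, variables):
    """First half of minimize(): pair each variable with its gradient."""
    return [(grad_fn(v), v) for v in variables]

def apply_gradients(grads_and_vars, learning_rate):
    """Second half of minimize(): apply each gradient to its variable.

    In TF this returns an Operation; here we just return the updated values.
    """
    return [v - learning_rate * g for g, v in grads_and_vars]

variables = [2.0]
# Gradient of the loss v**2 is 2*v.
grads_and_vars = compute_gradients(lambda v: 2.0 * v, variables)
updated = apply_gradients(grads_and_vars, learning_rate=0.5)
```

Splitting the two halves is what lets callers clip, scale, or otherwise transform the (gradient, variable) pairs before they are applied.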