RMSProp

public class RMSProp

Optimizer that implements the RMSProp algorithm.

The gist of RMSprop is to:

  • Maintain a moving (discounted) average of the square of gradients
  • Divide the gradient by the root of this average

This implementation of RMSprop uses plain momentum, not Nesterov momentum.

The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance.
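
As a rough sketch (not necessarily this class's exact implementation), the per-variable update performed each step can be written element-wise as follows, where rms, mom, and mg are the slot variables named by the RMS, MOMENTUM, and MG constants below, and the remaining symbols are the constructor parameters:

    // RMSProp step for one variable (illustrative pseudocode in Java syntax).
    rms = decay * rms + (1 - decay) * grad * grad;
    if (centered) {
      // Moving average of the gradients, used to estimate the variance.
      mg = decay * mg + (1 - decay) * grad;
      mom = momentum * mom + learningRate * grad / sqrt(rms - mg * mg + epsilon);
    } else {
      mom = momentum * mom + learningRate * grad / sqrt(rms + epsilon);
    }
    variable = variable - mom;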

Constants

Inherited Constants

org.tensorflow.framework.optimizers.Optimizer
String VARIABLE_V2

Public Constructors

RMSProp(Graph graph)
Creates an RMSProp Optimizer
RMSProp(Graph graph, float learningRate)
Creates an RMSProp Optimizer
RMSProp(Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered)
Creates an RMSProp Optimizer
RMSProp(Graph graph, String name, float learningRate)
Creates an RMSProp Optimizer
RMSProp(Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered)
Creates an RMSProp Optimizer

Public Methods

String
getOptimizerName()
Gets the name of the optimizer.
String
toString()
Returns a string representation of this optimizer.

Inherited Methods

org.tensorflow.framework.optimizers.Optimizer
Op
applyGradients(List<GradAndVar<? extends TType>> gradsAndVars, String name)
Applies gradients to variables
<T extends TType> List<GradAndVar<?>>
computeGradients(Operand<?> loss)
Computes the gradients based on a loss operand.
static String
createName(Output<? extends TType> variable, String slotName)
Creates a name by combining a variable name and a slot name
abstract String
getOptimizerName()
Gets the name of the optimizer.
<T extends TType> Optional<Variable<T>>
getSlot(Output<T> var, String slotName)
Gets the slot associated with the specified variable and slot name.
final Ops
getTF()
Gets the Optimizer's Ops instance
Op
minimize(Operand<?> loss)
Minimizes the loss by updating the variables
Op
minimize(Operand<?> loss, String name)
Minimizes the loss by updating the variables
boolean
equals(Object arg0)
final Class<?>
getClass()
int
hashCode()
final void
notify()
final void
notifyAll()
String
toString()
final void
wait(long arg0, int arg1)
final void
wait(long arg0)
final void
wait()

Constants

public static final boolean CENTERED_DEFAULT

Constant Value: false

public static final float DECAY_DEFAULT

Constant Value: 0.9

public static final float EPSILON_DEFAULT

Constant Value: 1.0E-10

public static final float LEARNING_RATE_DEFAULT

Constant Value: 0.001

public static final String MG

Constant Value: "mg"

public static final String MOMENTUM

Constant Value: "momentum"

public static final float MOMENTUM_DEFAULT

Constant Value: 0.0

public static final String RMS

Constant Value: "rms"
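
These constants name the slot variables this optimizer creates for each trained variable. As an unofficial sketch of how they can be used, a slot is retrievable through the inherited getSlot method; the variable and optimizer names here are assumptions for illustration:

    // Assumes `optimizer` is an RMSProp instance and `weights` is a
    // Variable<TFloat32> from the same graph (hypothetical names).
    Optional<Variable<TFloat32>> rms = optimizer.getSlot(weights.asOutput(), RMSProp.RMS);
    Optional<Variable<TFloat32>> mom = optimizer.getSlot(weights.asOutput(), RMSProp.MOMENTUM);
    // The MG ("mg") slot is present only when the optimizer was created
    // with centered = true.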

Public Constructors

public RMSProp (Graph graph)

Creates an RMSProp Optimizer

Parameters
graph the TensorFlow Graph

public RMSProp (Graph graph, float learningRate)

Creates an RMSProp Optimizer

Parameters
graph the TensorFlow Graph
learningRate the learning rate

public RMSProp (Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered)

Creates an RMSProp Optimizer

Parameters
graph the TensorFlow Graph
learningRate the learning rate
decay the discounting factor for the moving average of squared gradients. Defaults to 0.9.
momentum the acceleration factor. Defaults to 0.
epsilon a small constant for numerical stability.
centered If true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false.
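
For illustration, a minimal end-to-end sketch using this constructor on a toy quadratic loss. It assumes the standard tensorflow-java Graph/Session workflow; the variable, loss, and hyperparameter values are invented for the example:

    import org.tensorflow.Graph;
    import org.tensorflow.Operand;
    import org.tensorflow.Session;
    import org.tensorflow.framework.optimizers.RMSProp;
    import org.tensorflow.ndarray.Shape;
    import org.tensorflow.op.Op;
    import org.tensorflow.op.Ops;
    import org.tensorflow.op.core.Variable;
    import org.tensorflow.types.TFloat32;

    try (Graph graph = new Graph()) {
      Ops tf = Ops.create(graph);
      // Toy model: a single scalar weight, loss = (w - 5)^2.
      Variable<TFloat32> w = tf.variable(Shape.scalar(), TFloat32.class);
      Op init = tf.assign(w, tf.constant(3.0f));
      Operand<TFloat32> loss = tf.math.square(tf.math.sub(w, tf.constant(5.0f)));

      // Explicit hyperparameters: learning rate, decay, momentum,
      // epsilon, centered.
      RMSProp optimizer = new RMSProp(graph, 0.01f, 0.9f, 0.0f, 1e-10f, false);
      Op trainOp = optimizer.minimize(loss);

      try (Session session = new Session(graph)) {
        session.run(init);
        for (int step = 0; step < 100; step++) {
          session.run(trainOp);  // each run applies one RMSProp update
        }
      }
    }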

public RMSProp (Graph graph, String name, float learningRate)

Creates an RMSProp Optimizer

Parameters
graph the TensorFlow Graph
name the name of this Optimizer. Defaults to "RMSProp".
learningRate the learning rate

public RMSProp (Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered)

Creates an RMSProp Optimizer

Parameters
graph the TensorFlow Graph
name the name of this Optimizer. Defaults to "RMSProp".
learningRate the learning rate
decay the discounting factor for the moving average of squared gradients. Defaults to 0.9.
momentum the acceleration factor. Defaults to 0.
epsilon a small constant for numerical stability.
centered If true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false.
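
An explicit name is useful when, for instance, a single graph holds more than one optimizer, since the name distinguishes the ops each one creates. A brief sketch (the names and values here are invented):

    // Two independently named RMSProp instances in the same graph.
    RMSProp coarse = new RMSProp(graph, "CoarseRMSProp", 0.01f, 0.9f, 0.0f, 1e-10f, false);
    RMSProp fine = new RMSProp(graph, "FineRMSProp", 0.001f, 0.9f, 0.9f, 1e-10f, true);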

Public Methods

public String getOptimizerName ()

Gets the name of the optimizer.

Returns
  • The optimizer name.

public String toString ()

Returns a string representation of this optimizer.