Ftrl

public class Ftrl

Optimizer that implements the FTRL algorithm.

This version supports both online L2 regularization (the L2 penalty given in the FTRL-Proximal paper) and shrinkage-type L2, which is the addition of an L2 penalty to the loss function.
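The difference between the two L2 flavors can be seen in a per-coordinate sketch of the FTRL-Proximal update (for the default learningRatePower of -0.5). This is only an illustration of the math, not the library's actual kernel; `FtrlSketch` and its field names are made up for this example.

```java
// Per-coordinate FTRL-Proximal update sketch (learningRatePower = -0.5).
// Illustrative only; not the implementation used by the Ftrl class.
final class FtrlSketch {
    double w = 0.0;   // the weight being trained
    double n = 0.1;   // gradient accumulator (cf. INITIAL_ACCUMULATOR_VALUE_DEFAULT)
    double z = 0.0;   // "linear" accumulator
    final double alpha, l1, l2, l2Shrinkage;

    FtrlSketch(double alpha, double l1, double l2, double l2Shrinkage) {
        this.alpha = alpha;
        this.l1 = l1;
        this.l2 = l2;
        this.l2Shrinkage = l2Shrinkage;
    }

    void update(double g) {
        // Shrinkage-type L2 enters through the gradient (a magnitude penalty)...
        double gShrunk = g + 2.0 * l2Shrinkage * w;
        // ...while the accumulator uses the raw gradient.
        double nNew = n + g * g;
        double sigma = (Math.sqrt(nNew) - Math.sqrt(n)) / alpha;
        z += gShrunk - sigma * w;
        n = nNew;
        if (Math.abs(z) <= l1) {
            w = 0.0;  // L1 drives small weights to exactly zero (sparsity)
        } else {
            // Online L2 (l2) enters here as a stabilization term in the denominator.
            w = -(z - Math.signum(z) * l1) / (Math.sqrt(n) / alpha + 2.0 * l2);
        }
    }
}
```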

Constants

Inherited Constants

org.tensorflow.framework.optimizers.Optimizer
String VARIABLE_V2

Public Constructors

Ftrl(Graph graph)
Creates an Ftrl optimizer.
Ftrl(Graph graph, String name)
Creates an Ftrl optimizer.
Ftrl(Graph graph, float learningRate)
Creates an Ftrl optimizer.
Ftrl(Graph graph, String name, float learningRate)
Creates an Ftrl optimizer.
Ftrl(Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates an Ftrl optimizer.
Ftrl(Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates an Ftrl optimizer.

Public Methods

String
getOptimizerName()
Gets the name of the optimizer.

Inherited Methods

org.tensorflow.framework.optimizers.Optimizer
Op
applyGradients(List<GradAndVar<? extends TType>> gradsAndVars, String name)
Applies gradients to variables.
<T extends TType> List<GradAndVar<?>>
computeGradients(Operand<?> loss)
Computes the gradients based on a loss operand.
static String
createName(Output<? extends TType> variable, String slotName)
Creates a name by combining a variable name and a slot name.
abstract String
getOptimizerName()
Gets the name of the optimizer.
<T extends TType> Optional<Variable<T>>
getSlot(Output<T> var, String slotName)
Gets the slot associated with the specified variable and slot name.
final Ops
getTF()
Gets the Optimizer's Ops instance.
Op
minimize(Operand<?> loss)
Minimizes the loss by updating the variables.
Op
minimize(Operand<?> loss, String name)
Minimizes the loss by updating the variables.
boolean
equals(Object arg0)
final Class<?>
getClass()
int
hashCode()
final void
notify()
final void
notifyAll()
String
toString()
final void
wait(long arg0, int arg1)
final void
wait(long arg0)
final void
wait()

Constants

public static final String ACCUMULATOR

Constant Value: "gradient_accumulator"

public static final float INITIAL_ACCUMULATOR_VALUE_DEFAULT

Constant Value: 0.1

public static final float L1STRENGTH_DEFAULT

Constant Value: 0.0

public static final float L2STRENGTH_DEFAULT

Constant Value: 0.0

public static final float L2_SHRINKAGE_REGULARIZATION_STRENGTH_DEFAULT

Constant Value: 0.0

public static final float LEARNING_RATE_DEFAULT

Constant Value: 0.001

public static final float LEARNING_RATE_POWER_DEFAULT

Constant Value: -0.5

public static final String LINEAR_ACCUMULATOR

Constant Value: "linear_accumulator"

Public Constructors

public Ftrl (Graph graph)

Creates an Ftrl optimizer.

Parameters
graph the TensorFlow Graph

public Ftrl (Graph graph, String name)

Creates an Ftrl optimizer.

Parameters
graph the TensorFlow Graph
name the name of this Optimizer

public Ftrl (Graph graph, float learningRate)

Creates an Ftrl optimizer.

Parameters
graph the TensorFlow Graph
learningRate the learning rate

public Ftrl (Graph graph, String name, float learningRate)

Creates an Ftrl optimizer.

Parameters
graph the TensorFlow Graph
name the name of this Optimizer
learningRate the learning rate

public Ftrl (Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)

Creates an Ftrl optimizer.

Parameters
graph the TensorFlow Graph
learningRate the learning rate
learningRatePower controls how the learning rate decreases during training; use zero for a fixed learning rate
initialAccumulatorValue the starting value for accumulators; only zero or positive values are allowed
l1Strength the L1 regularization strength; must be greater than or equal to zero
l2Strength the L2 regularization strength; must be greater than or equal to zero
l2ShrinkageRegularizationStrength a magnitude penalty, unlike the stabilization penalty applied by l2Strength; must be greater than or equal to zero
Throws
IllegalArgumentException if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or if learningRatePower is greater than 0.0.
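The range checks described above can be sketched as follows. `FtrlArgs` and its `validate` helper are hypothetical names invented for this example, not part of the library; only the constraints themselves come from the documentation.

```java
// Sketch of the argument constraints documented in the Throws clause.
// FtrlArgs and validate are illustrative names, not the class's actual code.
final class FtrlArgs {
    static void validate(float learningRatePower, float initialAccumulatorValue,
                         float l1Strength, float l2Strength,
                         float l2ShrinkageRegularizationStrength) {
        if (initialAccumulatorValue < 0.0f)
            throw new IllegalArgumentException("initialAccumulatorValue must be >= 0");
        if (l1Strength < 0.0f)
            throw new IllegalArgumentException("l1Strength must be >= 0");
        if (l2Strength < 0.0f)
            throw new IllegalArgumentException("l2Strength must be >= 0");
        if (l2ShrinkageRegularizationStrength < 0.0f)
            throw new IllegalArgumentException("l2ShrinkageRegularizationStrength must be >= 0");
        if (learningRatePower > 0.0f)
            throw new IllegalArgumentException("learningRatePower must be <= 0");
    }
}
```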

public Ftrl (Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)

Creates an Ftrl optimizer.

Parameters
graph the TensorFlow Graph
name the name of this Optimizer
learningRate the learning rate
learningRatePower controls how the learning rate decreases during training; use zero for a fixed learning rate
initialAccumulatorValue the starting value for accumulators; only zero or positive values are allowed
l1Strength the L1 regularization strength; must be greater than or equal to zero
l2Strength the L2 regularization strength; must be greater than or equal to zero
l2ShrinkageRegularizationStrength a magnitude penalty, unlike the stabilization penalty applied by l2Strength; must be greater than or equal to zero
Throws
IllegalArgumentException if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or if learningRatePower is greater than 0.0.

Public Methods

public String getOptimizerName ()

Gets the name of the optimizer.

Returns
  • The optimizer name.