tf.compat.v1.train.ProximalAdagradOptimizer

Optimizer that implements the Proximal Adagrad algorithm.

Inherits From: Optimizer

References:

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: Duchi et al., 2011
Efficient Learning using Forward-Backward Splitting: Duchi et al., 2009
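As a rough sketch (our paraphrase of the FOBOS-style update described in the references above, not text from this page), each step accumulates squared gradients and then takes a per-coordinate proximal step. With learning rate $\eta$, gradient $g_t$, and regularization strengths $\lambda_1$, $\lambda_2$:

$$a_t = a_{t-1} + g_t^2, \qquad \eta_t = \frac{\eta}{\sqrt{a_t}}$$

$$w_{t+1} = \frac{\operatorname{sign}(w_t - \eta_t g_t)\,\max\!\big(0,\; |w_t - \eta_t g_t| - \eta_t \lambda_1\big)}{1 + \eta_t \lambda_2}$$

where $\lambda_1$ and $\lambda_2$ correspond to l1_regularization_strength and l2_regularization_strength. With both set to zero this reduces to the standard Adagrad update.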

Args

learning_rate: A Tensor or a floating point value. The learning rate.
initial_accumulator_value: A floating point value. Starting value for the accumulators; must be positive.
l1_regularization_strength: A float value; must be greater than or equal to zero.
l2_regularization_strength: A float value; must be greater than or equal to zero.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. Defaults to "ProximalAdagrad".

Raises

ValueError: If the initial_accumulator_value is invalid.
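A minimal usage sketch of the constructor arguments above, assuming TF1-style graph execution via the compat.v1 API; the toy data, variable, and regularization strengths are arbitrary example values:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Toy least-squares problem, for illustration only.
x = tf.constant([[1.0], [2.0], [3.0]])
y = tf.constant([[2.0], [4.0], [6.0]])
w = tf.Variable([[0.0]])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

# Example regularization strengths; tune these for a real problem.
optimizer = tf.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    initial_accumulator_value=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))
```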

Methods

apply_gradients


Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.
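A short sketch of calling the two parts of minimize() separately, which allows the gradients to be transformed before they are applied; the toy loss and the clipping step are illustrative assumptions, not part of this page:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

w = tf.Variable(1.0)
loss = tf.square(w - 3.0)  # toy loss, for illustration

opt = tf.train.ProximalAdagradOptimizer(learning_rate=0.1)

# First part of minimize(): compute (gradient, variable) pairs.
grads_and_vars = opt.compute_gradients(loss)
# Optionally transform the gradients here, e.g. clip them.
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
# Second part of minimize(): returns an Operation that applies the gradients.
train_op = opt.apply_gradients(clipped)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
```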