tf.compat.v1.train.ProximalGradientDescentOptimizer

Optimizer that implements the proximal gradient descent algorithm.

Inherits From: Optimizer

References:

Efficient Learning using Forward-Backward Splitting: Duchi et al., 2009.

Args

learning_rate: A Tensor or a floating point value. The learning rate to use.
l1_regularization_strength: A float value; must be greater than or equal to zero.
l2_regularization_strength: A float value; must be greater than or equal to zero.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. Defaults to "ProximalGradientDescent".
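The proximal update applied by this optimizer (following Duchi et al.'s forward-backward splitting) can be sketched in plain Python for a single scalar weight: take an ordinary gradient step, then apply the proximal operator of the l1/l2 regularizer (soft-thresholding for l1, multiplicative shrinkage for l2). This is an illustrative sketch, not the optimizer's actual kernel; the function name `proximal_sgd_step` is invented here.

```python
def proximal_sgd_step(w, grad, lr, l1=0.0, l2=0.0):
    """One proximal gradient descent update for a single weight.

    Illustrative sketch: first a plain gradient step, then the
    proximal operator of the l1/l2 regularizer.
    """
    v = w - lr * grad                       # ordinary gradient step
    shrink = max(abs(v) - lr * l1, 0.0)     # l1 soft-thresholding
    sign = 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return sign * shrink / (1.0 + lr * l2)  # l2 shrinkage
```

With both regularization strengths at zero this reduces to plain gradient descent; with a nonzero l1 strength, weights whose post-step magnitude falls below `lr * l1` are snapped exactly to zero, which is what makes the method produce sparse solutions.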

Methods

apply_gradients


Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies the gradients; if global_step was not None, that operation also increments global_step.

Args
grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
global_step: Optional Variable to increment by one after the variables have been updated.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
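A minimal usage sketch of the compute_gradients/apply_gradients pair, assuming TensorFlow 2.x with the v1 compat API and graph mode enabled (the quadratic loss and variable names here are invented for illustration):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

w = tf.Variable([3.0], name="w")
loss = tf.reduce_sum(tf.square(w))  # simple quadratic loss for illustration

opt = tf.train.ProximalGradientDescentOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.01,
    l2_regularization_strength=0.01)

# The two halves of minimize(), called explicitly:
grads_and_vars = opt.compute_gradients(loss)
global_step = tf.Variable(0, trainable=False, name="global_step")
train_op = opt.apply_gradients(grads_and_vars, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)  # one proximal update; also increments global_step
    w_val, step_val = sess.run([w, global_step])
```

After one step, w has shrunk toward zero and global_step holds 1, since it was passed to apply_gradients.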