ResourceSparseApplyProximalAdagrad

public final class ResourceSparseApplyProximalAdagrad

Sparse-update entries in '*var' and '*accum' according to the FOBOS algorithm.

That is, for the rows for which we have grad, var and accum are updated as follows:

accum += grad * grad
prox_v = var
prox_v -= lr * grad * (1 / sqrt(accum))
var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}
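To make the arithmetic concrete, here is a minimal plain-Java sketch of the per-row update (the method name and array-based layout are illustrative assumptions, not the op's actual implementation, which runs on tensors inside the TensorFlow runtime):

// Illustrative sketch of the FOBOS-style proximal Adagrad update applied
// to one row of var/accum selected by an entry of `indices`.
static void applyProximalAdagradRow(float[] varRow, float[] accumRow,
                                    float[] gradRow, float lr, float l1, float l2) {
  for (int i = 0; i < varRow.length; i++) {
    accumRow[i] += gradRow[i] * gradRow[i];  // accum += grad * grad
    float proxV = varRow[i] - lr * gradRow[i] / (float) Math.sqrt(accumRow[i]);
    // Soft-threshold |prox_v| by lr * l1, then shrink by the L2 factor.
    varRow[i] = Math.signum(proxV) / (1f + lr * l2)
        * Math.max(Math.abs(proxV) - lr * l1, 0f);
  }
}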

Nested Classes

class ResourceSparseApplyProximalAdagrad.Options Optional attributes for ResourceSparseApplyProximalAdagrad

Constants

String OP_NAME The name of this op, as known by the TensorFlow core engine.

Public Methods

static <T extends TType> ResourceSparseApplyProximalAdagrad create(Scope scope, Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> grad, Operand<? extends TNumber> indices, Options... options)
Factory method to create a class wrapping a new ResourceSparseApplyProximalAdagrad operation.

static ResourceSparseApplyProximalAdagrad.Options useLocking(Boolean useLocking)
Sets the useLocking option.

Inherited Methods

From class org.tensorflow.op.RawOp
final boolean equals(Object obj)
final int hashCode()
Operation op()
Return this unit of computation as a single Operation.
final String toString()

From class java.lang.Object
boolean equals(Object arg0)
final Class<?> getClass()
int hashCode()
final void notify()
final void notifyAll()
String toString()
final void wait(long arg0, int arg1)
final void wait(long arg0)
final void wait()

From interface org.tensorflow.op.Op
abstract ExecutionEnvironment env()
Return the execution environment this op was created in.
abstract Operation op()
Return this unit of computation as a single Operation.

Constants

public static final String OP_NAME

The name of this op, as known by the TensorFlow core engine.

Constant Value: "ResourceSparseApplyProximalAdagrad"

Public Methods

public static <T extends TType> ResourceSparseApplyProximalAdagrad create (Scope scope, Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<T> grad, Operand<? extends TNumber> indices, Options... options)

Factory method to create a class wrapping a new ResourceSparseApplyProximalAdagrad operation.

Parameters
scope current scope
var Should be from a Variable().
accum Should be from a Variable().
lr Learning rate. Must be a scalar.
l1 L1 regularization. Must be a scalar.
l2 L2 regularization. Must be a scalar.
grad The gradient.
indices A vector of indices into the first dimension of var and accum.
options carries optional attribute values
Returns
  • a new instance of ResourceSparseApplyProximalAdagrad
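
For orientation, a graph-construction call might look like the sketch below. It assumes a recent tensorflow-java artifact (package names such as org.tensorflow.op.train and the varHandleOp factory are version-dependent assumptions), and the shapes and hyperparameter values are made up:

import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.op.Ops;
import org.tensorflow.op.train.ResourceSparseApplyProximalAdagrad;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;

public class ProximalAdagradSketch {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);

      // Resource handles for the variable and its Adagrad accumulator.
      // In a real program both would also be initialized (e.g. via
      // tf.assignVariableOp) before the update runs.
      Operand<?> varHandle = tf.varHandleOp(TFloat32.class, Shape.of(4, 2));
      Operand<?> accumHandle = tf.varHandleOp(TFloat32.class, Shape.of(4, 2));

      // Gradients for two of the four rows, plus the row indices they apply to.
      Operand<TFloat32> grad = tf.constant(new float[][] {{0.1f, -0.2f}, {0.3f, 0.05f}});
      Operand<TInt32> indices = tf.constant(new int[] {0, 2});

      ResourceSparseApplyProximalAdagrad.create(
          tf.scope(), varHandle, accumHandle,
          tf.constant(0.01f),   // lr
          tf.constant(0.001f),  // l1
          tf.constant(0.001f),  // l2
          grad, indices,
          ResourceSparseApplyProximalAdagrad.useLocking(true));
    }
  }
}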

public static ResourceSparseApplyProximalAdagrad.Options useLocking (Boolean useLocking)

Parameters
useLocking If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
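
Standalone, the option object is created as follows and then passed as a trailing argument to create, as in the sketch above:

ResourceSparseApplyProximalAdagrad.Options opts =
    ResourceSparseApplyProximalAdagrad.useLocking(true);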