tensorflow::ops::ApplyAdagradDA

#include <training_ops.h>
Update '*var' according to the proximal adagrad scheme.
Summary
Args:
- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- lr: Scaling factor. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.

Optional attributes (see Attrs):
- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
 
Returns:
- Output: Same as "var".
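The description above names the proximal adagrad (AdagradDA) scheme without stating it. As a hedged sketch of the dual-averaging update (notation assumed here, not part of this API: g the incoming grad, g_a the gradient accumulator, s_a the gradient-squared accumulator, T the global step), the op computes approximately:

$$g_a \leftarrow g_a + g, \qquad s_a \leftarrow s_a + g \odot g$$

$$\mathrm{var} \leftarrow \frac{\operatorname{sign}(-g_a)\,\mathrm{lr}\,\max\!\big(|g_a| - l_1 T,\; 0\big)}{l_2\, T\, \mathrm{lr} + \sqrt{s_a}}$$

Consult the kernel source in training_ops for the authoritative behavior.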
Constructors and Destructors

|   |   |
|---|---|
| ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step) |  |
| ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ApplyAdagradDA::Attrs & attrs) |  |
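To make the constructor signatures above concrete, here is a minimal, hedged sketch of building and running the op with the C++ client API. The shapes, initial values, and hyperparameter values are illustrative assumptions, not part of the documented interface:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;       // for brevity in a sketch
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // The variable being trained plus its two AdagradDA accumulators,
  // each "from a Variable()" as the Args section requires.
  auto var = Variable(root, {2}, DT_FLOAT);
  auto grad_accum = Variable(root, {2}, DT_FLOAT);
  auto grad_sq_accum = Variable(root, {2}, DT_FLOAT);

  // Initialize the state (values here are arbitrary).
  auto init_var = Assign(root, var, Const(root, {1.0f, 2.0f}));
  auto init_ga = Assign(root, grad_accum, Const(root, {0.0f, 0.0f}));
  auto init_gs = Assign(root, grad_sq_accum, Const(root, {0.1f, 0.1f}));

  // global_step must be a scalar int64 tensor.
  auto global_step = Cast(root, Const(root, 1), DT_INT64);

  // One update step: grad, then the scalar lr, l1, and l2.
  auto apply = ApplyAdagradDA(root, var, grad_accum, grad_sq_accum,
                              Const(root, {0.5f, -0.5f}),  // grad
                              Const(root, 0.01f),          // lr
                              Const(root, 0.001f),         // l1
                              Const(root, 0.001f),         // l2
                              global_step);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  // Run the initializers, then the update.
  TF_CHECK_OK(session.Run({init_var, init_ga, init_gs}, &outputs));
  TF_CHECK_OK(session.Run({apply}, &outputs));
  // outputs[0] is the op's single output; per the Returns section above,
  // it is the same as the updated "var".
  return 0;
}
```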
Public attributes

|   |   |
|---|---|
| operation | Operation |
| out | ::tensorflow::Output |
Public functions

|   |   |
|---|---|
| node() const | ::tensorflow::Node * |
| operator::tensorflow::Input() const |  |
| operator::tensorflow::Output() const |  |
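Because the class defines both conversion operators listed above, an ApplyAdagradDA instance can be passed directly where an Input is expected or fetched from a session where an Output is expected. A brief continuation of the earlier sketch (root, apply, and session come from that sketch, not from this API):

```cpp
// Compose the updated var with another op: operator ::tensorflow::Input.
auto scaled = Mul(root, apply, Const(root, 2.0f));

// Fetch the op directly: operator ::tensorflow::Output.
std::vector<Tensor> fetched;
TF_CHECK_OK(session.Run({apply}, &fetched));
```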
Public static functions

|   |   |
|---|---|
| UseLocking(bool x) | Attrs |
Structs

|   |   |
|---|---|
| tensorflow::ops::ApplyAdagradDA::Attrs | Optional attribute setters for ApplyAdagradDA. |
Public attributes

operation

Operation operation

out

::tensorflow::Output out

Public functions

ApplyAdagradDA

ApplyAdagradDA(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input gradient_accumulator,
  ::tensorflow::Input gradient_squared_accumulator,
  ::tensorflow::Input grad,
  ::tensorflow::Input lr,
  ::tensorflow::Input l1,
  ::tensorflow::Input l2,
  ::tensorflow::Input global_step
)

ApplyAdagradDA

ApplyAdagradDA(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input gradient_accumulator,
  ::tensorflow::Input gradient_squared_accumulator,
  ::tensorflow::Input grad,
  ::tensorflow::Input lr,
  ::tensorflow::Input l1,
  ::tensorflow::Input l2,
  ::tensorflow::Input global_step,
  const ApplyAdagradDA::Attrs & attrs
)

node

::tensorflow::Node * node() const

operator::tensorflow::Input

operator::tensorflow::Input() const

operator::tensorflow::Output

operator::tensorflow::Output() const

Public static functions

UseLocking

Attrs UseLocking(
  bool x
)
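As a usage note, the Attrs value returned by this setter feeds the second constructor listed above. A hedged continuation of the earlier sketch (the surrounding names are assumed from it):

```cpp
// Same update as before, but with the optional use_locking attribute set so
// that updates to var and the accumulators are protected by a lock.
auto locked = ApplyAdagradDA(
    root, var, grad_accum, grad_sq_accum,
    Const(root, {0.5f, -0.5f}),               // grad
    Const(root, 0.01f), Const(root, 0.001f),  // lr, l1
    Const(root, 0.001f),                      // l2
    Cast(root, Const(root, 1), DT_INT64),     // global_step
    ApplyAdagradDA::UseLocking(true));
```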