# tensorflow::ops::ApplyAdagradDA Class Reference

`#include <training_ops.h>`

Update '*var' according to the proximal adagrad scheme.

## Summary
Args:

- scope: A [Scope](/versions/r2.5/api_docs/cc/class/tensorflow/scope#classtensorflow_1_1_scope) object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- lr: Scaling factor. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.

Optional attributes (see [Attrs](/versions/r2.5/api_docs/cc/struct/tensorflow/ops/apply-adagrad-d-a/attrs#structtensorflow_1_1ops_1_1_apply_adagrad_d_a_1_1_attrs)):

- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.

Returns:

- [Output](/versions/r2.5/api_docs/cc/class/tensorflow/output#classtensorflow_1_1_output): Same as "var".
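To make the "proximal adagrad scheme" concrete, the element-wise sketch below shows what one step of the AdagradDA ("dual averaging") update does to the accumulators and the variable. This is a hedged, self-contained illustration based on the published AdagradDA algorithm, not a copy of the TensorFlow kernel; the struct and function names (`AdagradDAState`, `apply_adagrad_da_step`) are ours, not part of the API.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical element-wise sketch of one AdagradDA step.
// Assumption: follows the published dual-averaging update, in which the
// variable is recomputed from the gradient sums each step rather than
// updated incrementally.
struct AdagradDAState {
  double var;                 // the variable being optimized
  double grad_accum;          // running sum of gradients
  double grad_squared_accum;  // running sum of squared gradients
};

inline void apply_adagrad_da_step(AdagradDAState& s, double grad, double lr,
                                  double l1, double l2,
                                  long long global_step) {
  s.grad_accum += grad;
  s.grad_squared_accum += grad * grad;
  const double gs = static_cast<double>(global_step);
  // Denominator combines L2 regularization with accumulated curvature.
  const double denom = l2 * gs * lr + std::sqrt(s.grad_squared_accum);
  if (l1 > 0.0) {
    // Soft-thresholding by l1 * global_step drives small weights to zero.
    const double shrunk = std::max(std::abs(s.grad_accum) - l1 * gs, 0.0);
    const double sign =
        s.grad_accum > 0.0 ? 1.0 : (s.grad_accum < 0.0 ? -1.0 : 0.0);
    s.var = -lr * sign * shrunk / denom;
  } else {
    s.var = -lr * s.grad_accum / denom;
  }
}
```

Note the l1 branch: when the accumulated gradient magnitude stays below `l1 * global_step`, the variable is set exactly to zero, which is how this optimizer produces sparse models.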
### Constructors and Destructors

| | |
|---|---|
| `ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)` | |
| `ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ApplyAdagradDA::Attrs & attrs)` | |
### Public attributes

| | |
|---|---|
| `operation` | `Operation` |
| `out` | `::tensorflow::Output` |

### Public functions

| | |
|---|---|
| `node() const` | `::tensorflow::Node *` |
| `operator::tensorflow::Input() const` | |
| `operator::tensorflow::Output() const` | |

### Public static functions

| | |
|---|---|
| `UseLocking(bool x)` | `Attrs` |

### Structs

| | |
|---|---|
| [tensorflow::ops::ApplyAdagradDA::Attrs](/versions/r2.5/api_docs/cc/struct/tensorflow/ops/apply-adagrad-d-a/attrs) | Optional attribute setters for ApplyAdagradDA. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2021-05-14 UTC.