tensorflow::ops::SparseApplyAdagrad
#include <training_ops.h>
Update relevant entries in '*var' and '*accum' according to the adagrad scheme.
Summary
That is, for the rows for which we have grad, var and accum are updated as follows: $$accum += grad * grad$$ $$var -= lr * grad * (1 / sqrt(accum))$$
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- accum: Should be from a Variable().
- lr: Learning rate. Must be a scalar.
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- Output: Same as "var".
Constructors and Destructors
- SparseApplyAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input grad, ::tensorflow::Input indices)
- SparseApplyAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input grad, ::tensorflow::Input indices, const SparseApplyAdagrad::Attrs & attrs)
Public attributes
- operation: Operation
- out: ::tensorflow::Output
Public functions
- node: ::tensorflow::Node * node() const
- operator::tensorflow::Input() const
- operator::tensorflow::Output() const
Public static functions
- UpdateSlots: Attrs UpdateSlots(bool x)
- UseLocking: Attrs UseLocking(bool x)
Structs
- tensorflow::ops::SparseApplyAdagrad::Attrs: Optional attribute setters for SparseApplyAdagrad.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-04-20 UTC.