tensorflow::ops::SparseApplyAdagradDA Class Reference
#include <training_ops.h>
Update entries in '*var' and '*accum' according to the proximal adagrad scheme.
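The reference itself does not spell out the update rule; for orientation, here is a sketch of the standard AdaGrad dual-averaging (DA) update that this op family implements, where t is global_step and the rule is applied only to the rows of var selected by indices (sign conventions and edge-case handling in the actual kernel may differ):

$$\bar{g}_t = \sum_{s=1}^{t} g_s, \qquad G_t = \sum_{s=1}^{t} g_s^{2}$$

$$\mathrm{var} \leftarrow -\,\frac{lr \cdot \operatorname{sign}(\bar{g}_t)\,\max\bigl(|\bar{g}_t| - t \cdot l1,\; 0\bigr)}{lr \cdot l2 \cdot t + \sqrt{G_t}}$$

Here the running sums \(\bar{g}_t\) and \(G_t\) are the state stored in gradient_accumulator and gradient_squared_accumulator.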
Summary
Arguments:

- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
- lr: Learning rate. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.
Optional attributes (see Attrs):

- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:

- Output: Same as "var".
Constructors and Destructors

SparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)

SparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const SparseApplyAdagradDA::Attrs & attrs)
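To make the constructor arguments concrete, here is a minimal, hedged sketch of wiring this op into a graph with the TensorFlow C++ API. The shapes, values, and variable names are illustrative assumptions, not part of this reference, and in a real program the three variables would also need to be initialized (for example with Assign) before the op can run.

#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

int main() {
  Scope root = Scope::NewRootScope();

  // Mutable state: the variable being optimized and its two accumulators.
  auto var = Variable(root, {4, 2}, DT_FLOAT);
  auto gradient_accumulator = Variable(root, {4, 2}, DT_FLOAT);
  auto gradient_squared_accumulator = Variable(root, {4, 2}, DT_FLOAT);

  // Sparse gradient: only rows 0 and 2 of var receive an update.
  auto grad = Const(root, {{0.1f, 0.2f}, {0.3f, 0.4f}});
  auto indices = Const(root, {0, 2});

  // Scalar hyperparameters, as required by the argument list above.
  auto lr = Const(root, 0.01f);
  auto l1 = Const(root, 0.001f);
  auto l2 = Const(root, 0.001f);
  auto global_step = Const(root, static_cast<int64_t>(1));

  // Second constructor form: pass the optional Attrs to lock updates.
  auto update = SparseApplyAdagradDA(
      root, var, gradient_accumulator, gradient_squared_accumulator,
      grad, indices, lr, l1, l2, global_step,
      SparseApplyAdagradDA::UseLocking(true));

  // update.out evaluates to the same tensor as var.
  return 0;
}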
Public attributes

operation

Operation operation

out

::tensorflow::Output out
Public functions

SparseApplyAdagradDA

SparseApplyAdagradDA(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input gradient_accumulator,
  ::tensorflow::Input gradient_squared_accumulator,
  ::tensorflow::Input grad,
  ::tensorflow::Input indices,
  ::tensorflow::Input lr,
  ::tensorflow::Input l1,
  ::tensorflow::Input l2,
  ::tensorflow::Input global_step
)

SparseApplyAdagradDA
SparseApplyAdagradDA(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input gradient_accumulator,
::tensorflow::Input gradient_squared_accumulator,
::tensorflow::Input grad,
::tensorflow::Input indices,
::tensorflow::Input lr,
::tensorflow::Input l1,
::tensorflow::Input l2,
::tensorflow::Input global_step,
const SparseApplyAdagradDA::Attrs & attrs
)
node
::tensorflow::Node * node() const
operator::tensorflow::Input

operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
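Because of these two conversion operators, the constructed op can be passed directly wherever a ::tensorflow::Input or ::tensorflow::Output is expected, for example as a session fetch. A hedged sketch reusing the names from the example earlier on this page (and assuming the variables were initialized first):

// Needs tensorflow/cc/client/client_session.h.
ClientSession session(root);
std::vector<Tensor> outputs;
// update converts implicitly to ::tensorflow::Output here.
TF_CHECK_OK(session.Run({update}, &outputs));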
Public static functions
UseLocking
Attrs UseLocking(
bool x
)
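For this op the only optional attribute is use_locking, so UseLocking is the single static setter that produces an Attrs value. A hedged sketch, reusing the graph inputs from the example earlier on this page:

SparseApplyAdagradDA::Attrs attrs = SparseApplyAdagradDA::UseLocking(true);
auto locked_update = SparseApplyAdagradDA(
    root, var, gradient_accumulator, gradient_squared_accumulator,
    grad, indices, lr, l1, l2, global_step, attrs);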