# tensorflow::ops::ResourceSparseApplyAdagradDA Class Reference
`#include <training_ops.h>`
Update entries in '*var' and '*accum' according to the proximal adagrad scheme.
## Summary
Arguments:

- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
- lr: Learning rate. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.
Optional attributes (see Attrs):

- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:

- the created Operation
### Constructors and Destructors

- `ResourceSparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)`
- `ResourceSparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ResourceSparseApplyAdagradDA::Attrs & attrs)`
## Public attributes

### operation

```
Operation operation
```
## Public functions
### ResourceSparseApplyAdagradDA

```cpp
ResourceSparseApplyAdagradDA(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input gradient_accumulator,
  ::tensorflow::Input gradient_squared_accumulator,
  ::tensorflow::Input grad,
  ::tensorflow::Input indices,
  ::tensorflow::Input lr,
  ::tensorflow::Input l1,
  ::tensorflow::Input l2,
  ::tensorflow::Input global_step
)
```

### ResourceSparseApplyAdagradDA

```cpp
ResourceSparseApplyAdagradDA(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input gradient_accumulator,
  ::tensorflow::Input gradient_squared_accumulator,
  ::tensorflow::Input grad,
  ::tensorflow::Input indices,
  ::tensorflow::Input lr,
  ::tensorflow::Input l1,
  ::tensorflow::Input l2,
  ::tensorflow::Input global_step,
  const ResourceSparseApplyAdagradDA::Attrs & attrs
)
```
### operator::tensorflow::Operation

```cpp
operator::tensorflow::Operation() const
```
## Public static functions
### UseLocking

```cpp
Attrs UseLocking(
  bool x
)
```
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-07-26 UTC.