tensorflow::ops::ResourceSparseApplyAdagrad
#include <training_ops.h>
Update relevant entries in '*var' and '*accum' according to the adagrad scheme.
Summary
That is, for the rows for which we have grad, var and accum are updated as follows:

accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))
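Purely as an illustration of the formula above, here is the per-row update written as a plain C++ loop (the function name, the flat row-major layout, and the argument types are made up for this sketch; the op itself performs the update in place on the resource variables):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the sparse Adagrad update for the rows named in `indices`.
// `var` and `accum` are row-major matrices with `row_size` columns;
// `grad` holds one row per entry of `indices`.
void SparseAdagradUpdate(std::vector<float>& var, std::vector<float>& accum,
                         const std::vector<float>& grad,
                         const std::vector<int>& indices,
                         float lr, std::size_t row_size) {
  for (std::size_t i = 0; i < indices.size(); ++i) {
    const std::size_t row = static_cast<std::size_t>(indices[i]);
    for (std::size_t j = 0; j < row_size; ++j) {
      const float g = grad[i * row_size + j];
      float& a = accum[row * row_size + j];
      a += g * g;                                        // accum += grad * grad
      var[row * row_size + j] -= lr * g / std::sqrt(a);  // var -= lr * grad / sqrt(accum)
    }
  }
}
```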
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- accum: Should be from a Variable().
- lr: Learning rate. Must be a scalar.
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:

- the created Operation
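For context, a minimal sketch of constructing and running the op with the C++ client API is shown below. The header paths, the use of VarHandleOp and AssignVariableOp to create the resource variables, and the exact ClientSession::Run overload are assumptions about the surrounding API rather than something this page specifies:

```cpp
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/resource_variable_ops.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

int main() {
  Scope root = Scope::NewRootScope();

  // Resource variables for the parameters and the Adagrad accumulator (4 rows of 2).
  auto var = VarHandleOp(root, DT_FLOAT, PartialTensorShape({4, 2}));
  auto accum = VarHandleOp(root, DT_FLOAT, PartialTensorShape({4, 2}));
  auto init_var = AssignVariableOp(root, var, Const(root, 1.0f, {4, 2}));
  auto init_accum = AssignVariableOp(root, accum, Const(root, 0.1f, {4, 2}));

  // A gradient for rows 0 and 2 only.
  auto grad = Const(root, {{0.5f, 0.5f}, {1.0f, 1.0f}});
  auto indices = Const(root, {0, 2});

  // Apply the sparse Adagrad update with a scalar learning rate.
  auto update = ResourceSparseApplyAdagrad(root, var, accum,
                                           Const(root, 0.01f), grad, indices);

  ClientSession session(root);
  TF_CHECK_OK(session.Run({}, {}, {init_var, init_accum}, nullptr));
  TF_CHECK_OK(session.Run({}, {}, {update}, nullptr));
  return 0;
}
```

Only rows 0 and 2 of var and accum are modified; the remaining rows are left untouched.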
Constructors and Destructors

ResourceSparseApplyAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input grad, ::tensorflow::Input indices)

ResourceSparseApplyAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input grad, ::tensorflow::Input indices, const ResourceSparseApplyAdagrad::Attrs & attrs)
Public attributes

operation
Operation operation
Public functions

operator::tensorflow::Operation
operator::tensorflow::Operation() const
Public static functions
UpdateSlots
Attrs UpdateSlots(
bool x
)
UseLocking
Attrs UseLocking(
bool x
)
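Both static setters return an Attrs value, so they can be chained and the result passed to the attrs-taking constructor. A small sketch (the helper name and include path are assumptions, not part of this page):

```cpp
#include "tensorflow/cc/ops/training_ops.h"

// Hypothetical helper: build the optional attributes by chaining the
// generated setters; both UseLocking and UpdateSlots return an Attrs copy.
tensorflow::ops::ResourceSparseApplyAdagrad::Attrs LockedAttrs() {
  return tensorflow::ops::ResourceSparseApplyAdagrad::UseLocking(true)
      .UpdateSlots(false);
}
```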