tensorflow::ops::SparseApplyAdagrad
#include <training_ops.h>
Update relevant entries in '*var' and '*accum' according to the adagrad scheme.
Summary
That is, for rows for which we have grad, we update var and accum as follows: $$accum += grad * grad$$ $$var -= lr * grad * (1 / sqrt(accum))$$
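The update rule above can be sketched in NumPy (a purely illustrative sketch, not the C++ API; the shapes, values, and indices below are made up):

```python
import numpy as np

# Hypothetical example: var and accum are 4x2 variables, and only the
# rows named in `indices` receive a sparse Adagrad update.
var = np.ones((4, 2))
accum = np.full((4, 2), 0.1)
lr = 0.5
indices = np.array([0, 2])
grad = np.array([[1.0, 2.0], [3.0, 4.0]])  # one gradient row per index

# accum += grad * grad  (applied only to the indexed rows)
accum[indices] += grad * grad
# var -= lr * grad * (1 / sqrt(accum))  (likewise only the indexed rows)
var[indices] -= lr * grad / np.sqrt(accum[indices])
```

Rows not listed in `indices` (here, rows 1 and 3) are left untouched, which is the point of the sparse variant: only the touched embedding rows pay the update cost.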
Arguments:

- scope: A Scope object
- var: Should be from a Variable().
- accum: Should be from a Variable().
- lr: Learning rate. Must be a scalar.
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
Optional attributes (see Attrs):

- use_locking: If `True`, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:

- Output: Same as "var".
| Constructors and Destructors |
|---|
| `SparseApplyAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input grad, ::tensorflow::Input indices)` |
| `SparseApplyAdagrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input accum, ::tensorflow::Input lr, ::tensorflow::Input grad, ::tensorflow::Input indices, const SparseApplyAdagrad::Attrs & attrs)` |
Public attributes

operation
Operation operation

out
::tensorflow::Output out

Public functions

node
::tensorflow::Node * node() const

operator::tensorflow::Input
operator::tensorflow::Input() const

operator::tensorflow::Output
operator::tensorflow::Output() const

Public static functions

UpdateSlots
Attrs UpdateSlots(bool x)

UseLocking
Attrs UseLocking(bool x)
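`UpdateSlots` and `UseLocking` each return an `Attrs` struct that can be passed to the second constructor. This page does not describe what `update_slots` does; under the assumption (taken from the corresponding raw op, and worth verifying) that setting it to false skips the accumulator update and applies only the var update, the distinction can be sketched in NumPy:

```python
import numpy as np

def sparse_apply_adagrad(var, accum, lr, grad, indices, update_slots=True):
    """Assumed semantics: update_slots=False leaves accum unmodified."""
    if update_slots:
        accum[indices] += grad * grad
    var[indices] -= lr * grad / np.sqrt(accum[indices])
    return var, accum

var = np.ones((3, 1))
accum = np.full((3, 1), 4.0)
sparse_apply_adagrad(var, accum, lr=1.0,
                     grad=np.array([[2.0]]), indices=np.array([1]),
                     update_slots=False)
# accum row 1 stays at 4.0; var row 1 becomes 1 - 1.0 * 2 / sqrt(4) = 0.0
```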
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-26 UTC.