tensorflow::ops::SparseApplyAdagradDA
#include <training_ops.h>
Update entries in '*var' and '*accum' according to the proximal adagrad scheme.

Summary

Arguments:

- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- indices: A vector of indices into the first dimension of var and accum.
- lr: Learning rate. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.

Optional attributes (see Attrs):

- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.

Returns:

- Output: Same as "var".
Constructors and Destructors

SparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step)

SparseApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input indices, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const SparseApplyAdagradDA::Attrs & attrs)
Public attributes

- operation: Operation
- out: ::tensorflow::Output
Public functions
SparseApplyAdagradDA
SparseApplyAdagradDA(
const ::tensorflow::Scope & scope,
::tensorflow::Input var,
::tensorflow::Input gradient_accumulator,
::tensorflow::Input gradient_squared_accumulator,
::tensorflow::Input grad,
::tensorflow::Input indices,
::tensorflow::Input lr,
::tensorflow::Input l1,
::tensorflow::Input l2,
::tensorflow::Input global_step,
const SparseApplyAdagradDA::Attrs & attrs
)
node
::tensorflow::Node * node() const
operator::tensorflow::Input

operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
Public static functions

UseLocking
Attrs UseLocking(
bool x
)
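To make the op's behavior concrete, the following is a hypothetical plain-C++ sketch of the sparse AdagradDA step: only the rows of var and the accumulators named in indices are touched, both accumulators are updated, and var is recomputed from the accumulated sums with l1 shrinkage and l2 scaling. The exact arithmetic of the TensorFlow kernel is an assumption here (based on the dual-averaging scheme the op is named after); treat this as an illustration of the shape of the update, not a reimplementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch (not TensorFlow code): dense rows stored as a
// vector of vectors stand in for the var/accumulator tensors.
using Matrix = std::vector<std::vector<double>>;

void SparseAdagradDAUpdate(Matrix& var, Matrix& g_acc, Matrix& gg_acc,
                           const Matrix& grad,
                           const std::vector<std::size_t>& indices,
                           double lr, double l1, double l2,
                           double global_step) {
  for (std::size_t r = 0; r < indices.size(); ++r) {
    const std::size_t i = indices[r];  // row of var/accumulators to touch
    for (std::size_t j = 0; j < grad[r].size(); ++j) {
      const double g = grad[r][j];
      g_acc[i][j] += g;       // running gradient sum
      gg_acc[i][j] += g * g;  // running squared-gradient sum
      // Proximal step: shrink |g_acc| by l1 * global_step, keep the sign
      // opposite to the accumulated gradient, and divide by the
      // l2-regularized adaptive norm.
      const double shrunk =
          std::max(std::fabs(g_acc[i][j]) - l1 * global_step, 0.0);
      const double num = (g_acc[i][j] > 0.0 ? -1.0 : 1.0) * lr * shrunk;
      const double den = l2 * global_step * lr + std::sqrt(gg_acc[i][j]);
      var[i][j] = num / den;
    }
  }
}
```

Note the sparse aspect: rows of var not listed in indices are left untouched, which is what distinguishes this op from its dense ApplyAdagradDA counterpart.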
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-26 UTC.