tensorflow::ops::ApplyAdagradDA Class Reference
#include <training_ops.h>
Update '*var' according to the proximal adagrad scheme.

Summary

Arguments:

- scope: A Scope object
- var: Should be from a Variable().
- gradient_accumulator: Should be from a Variable().
- gradient_squared_accumulator: Should be from a Variable().
- grad: The gradient.
- lr: Scaling factor. Must be a scalar.
- l1: L1 regularization. Must be a scalar.
- l2: L2 regularization. Must be a scalar.
- global_step: Training step number. Must be a scalar.

Optional attributes (see Attrs):

- use_locking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.

Returns:

- Output: Same as "var".
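As a rough illustration of how this op might be wired into a graph with the C++ API, here is a minimal sketch; the shapes, initial values, and hyperparameter settings are arbitrary assumptions, and variable initialization and session handling are deliberately simplified:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // The trainable variable and the two dual-averaging accumulators.
  auto var = Variable(root, {2}, DT_FLOAT);
  auto grad_acc = Variable(root, {2}, DT_FLOAT);
  auto grad_sq_acc = Variable(root, {2}, DT_FLOAT);

  // Initialize all three before the first update (values are illustrative).
  auto init_var = Assign(root, var, Const(root, {0.0f, 0.0f}));
  auto init_ga = Assign(root, grad_acc, Const(root, {0.0f, 0.0f}));
  auto init_gsa = Assign(root, grad_sq_acc, Const(root, {0.0f, 0.0f}));

  // Gradient, scalar hyperparameters, and the scalar global step.
  auto grad = Const(root, {0.1f, -0.2f});
  auto lr = Const(root, 0.01f);
  auto l1 = Const(root, 0.0f);
  auto l2 = Const(root, 0.0f);
  auto global_step = Const(root, static_cast<tensorflow::int64>(1));

  // Apply one proximal-adagrad update; the op's output is the same as "var".
  auto update = ApplyAdagradDA(root, var, grad_acc, grad_sq_acc, grad,
                               lr, l1, l2, global_step);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({init_var, init_ga, init_gsa}, &outputs));
  TF_CHECK_OK(session.Run({update}, &outputs));
  return 0;
}
```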
| Constructors and Destructors | |
|---|---|
| ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step) | |
| ApplyAdagradDA(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ApplyAdagradDA::Attrs & attrs) | |

| Public attributes | |
|---|---|
| operation | Operation |
| out | ::tensorflow::Output |

| Public functions | |
|---|---|
| node() const | ::tensorflow::Node * |
| operator::tensorflow::Input() const | |
| operator::tensorflow::Output() const | |

| Public static functions | |
|---|---|
| UseLocking(bool x) | Attrs |

| Structs | |
|---|---|
| tensorflow::ops::ApplyAdagradDA::Attrs | Optional attribute setters for ApplyAdagradDA. |
Public attributes

operation
Operation operation
out
::tensorflow::Output out
Public functions

ApplyAdagradDA
ApplyAdagradDA( const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step )
ApplyAdagradDA
ApplyAdagradDA( const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input gradient_accumulator, ::tensorflow::Input gradient_squared_accumulator, ::tensorflow::Input grad, ::tensorflow::Input lr, ::tensorflow::Input l1, ::tensorflow::Input l2, ::tensorflow::Input global_step, const ApplyAdagradDA::Attrs & attrs )
node
::tensorflow::Node * node() const
operator::tensorflow::Input
operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
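Because of the node() accessor and these conversion operators, the constructed op can be passed directly wherever an Input or Output is expected. A brief fragment, reusing the `update` and `session` names assumed in the earlier sketch:

```cpp
// The implicit Output conversion lets the op be fetched directly.
std::vector<Tensor> fetched;
TF_CHECK_OK(session.Run({update}, &fetched));

// node() exposes the underlying graph node, e.g. to read its name.
const std::string node_name = update.node()->name();
```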
Public static functions
UseLocking
Attrs UseLocking( bool x )
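To set the optional attribute, the Attrs value returned by UseLocking can be passed to the second constructor. A short sketch, again reusing the inputs assumed in the earlier example:

```cpp
// Request a lock around the variable/accumulator update.
auto locked_update = ApplyAdagradDA(root, var, grad_acc, grad_sq_acc, grad,
                                    lr, l1, l2, global_step,
                                    ApplyAdagradDA::UseLocking(true));
```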