tf_privacy.DPFTRLTreeAggregationOptimizer
Returns a `DPOptimizerClass` `cls` using the `TreeAggregationQuery`.
```python
tf_privacy.DPFTRLTreeAggregationOptimizer(
    l2_norm_clip: float,
    noise_multiplier: float,
    var_list_or_model: Union[_VarListType, tf.keras.Model],
    num_microbatches: Optional[int] = None,
    gradient_accumulation_steps: int = 1,
    restart_period: Optional[int] = None,
    restart_warmup: Optional[int] = None,
    noise_seed: Optional[int] = None,
    *args,
    **kwargs
)
```
Combining this query with an SGD optimizer can be used to implement the DP-FTRL algorithm in "Practical and Private (Deep) Learning without Sampling or Shuffling".
This function is a thin wrapper around `make_keras_optimizer_class.<locals>.DPOptimizerClass` which can be used to apply a `TreeAggregationQuery` to any `DPOptimizerClass`.
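A minimal usage sketch follows; it is not from this page. It assumes `tensorflow_privacy` is imported as `tf_privacy` and that extra keyword arguments such as `learning_rate` are forwarded through `**kwargs` to the base optimizer's `__init__`, as described under Args below. The vector (per-example) loss mirrors the general requirement of TF Privacy's DP Keras optimizers, which split the loss into microbatches.

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy

# Toy model; its variables define the structure and shapes of the
# records (gradients) tracked by the tree aggregation query.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),
])

optimizer = tf_privacy.DPFTRLTreeAggregationOptimizer(
    l2_norm_clip=1.0,         # max L2 norm of each per-microbatch gradient
    noise_multiplier=1.1,     # noise stddev = 1.1 * l2_norm_clip
    var_list_or_model=model,  # TensorSpecs derived from the model's variables
    num_microbatches=None,    # None: one microbatch per example
    learning_rate=0.1,        # assumed forwarded to the base optimizer __init__
)

# DP Keras optimizers need a per-example loss (reduction NONE) so the
# gradient can be split into microbatches before clipping.
loss = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
model.compile(optimizer=optimizer, loss=loss)
```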
| Args | |
|---|---|
| `l2_norm_clip` | Clipping norm (max L2 norm of per-microbatch gradients). |
| `noise_multiplier` | Ratio of the standard deviation to the clipping norm. |
| `var_list_or_model` | Either a `tf.keras.Model` or a list of `tf.Variable`s from which `tf.TensorSpec`s can be defined. These specify the structure and shapes of records (gradients). |
| `num_microbatches` | Number of microbatches into which each minibatch is split. The default, `None`, means the number of microbatches equals the batch size (i.e. each microbatch contains exactly one example). If `gradient_accumulation_steps` is greater than 1 and `num_microbatches` is not `None`, the effective number of microbatches is `num_microbatches * gradient_accumulation_steps`. |
| `gradient_accumulation_steps` | If greater than 1, the optimizer accumulates gradients for this number of optimizer steps before applying them to update model weights. If set to 1, updates are applied on each optimizer step. |
| `restart_period` | (Optional) A restart will occur after `restart_period` steps. The default (`None`) means no periodic restarts. Must be a positive integer. If `restart_warmup` is passed, `restart_period` must not be `None` and governs only the second and subsequent restarts. |
| `restart_warmup` | (Optional) The first restart will occur after `restart_warmup` steps. The default (`None`) means no warmup. Must be an integer in the range [1, `restart_period` - 1]. |
| `noise_seed` | (Optional) Integer seed for the Gaussian noise generator. If `None`, a nondeterministic seed based on system time is generated. |
| `*args` | Passed on to the base class `__init__` method. |
| `**kwargs` | Passed on to the base class `__init__` method. |

| Raises | |
|---|---|
| `ValueError` | If `restart_warmup` is not `None` and `restart_period` is `None`. |
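To make the restart arguments concrete, here is an illustrative sketch (all values are arbitrary, and `model` is the toy model from the sketch above): the tree is restarted for the first time after `restart_warmup` steps and every `restart_period` steps thereafter, and passing a warmup without a period is the documented `ValueError` case.

```python
optimizer = tf_privacy.DPFTRLTreeAggregationOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=1.1,
    var_list_or_model=model,
    restart_period=1000,  # second and later restarts: every 1000 steps
    restart_warmup=100,   # first restart after 100 steps; in [1, restart_period - 1]
    noise_seed=2024,      # fixed seed for reproducible Gaussian noise
)

# Documented error case: restart_warmup without restart_period.
# tf_privacy.DPFTRLTreeAggregationOptimizer(
#     l2_norm_clip=1.0, noise_multiplier=1.1, var_list_or_model=model,
#     restart_warmup=100)  # raises ValueError
```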