tf_privacy.DPKerasAdamOptimizer
Returns a `DPOptimizerClass` `cls` using the `GaussianSumQuery`.
```python
tf_privacy.DPKerasAdamOptimizer(
    l2_norm_clip: float,
    noise_multiplier: float,
    num_microbatches: Optional[int] = None,
    gradient_accumulation_steps: int = 1,
    *args,
    **kwargs
)
```
This function is a thin wrapper around
`make_keras_optimizer_class.<locals>.DPOptimizerClass`, which can be used to
apply a `GaussianSumQuery` to any `DPOptimizerClass`.
When combined with stochastic gradient descent, this creates the canonical
DP-SGD algorithm of "Deep Learning with Differential Privacy"
(see https://arxiv.org/abs/1607.00133).
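For intuition, the sketch below outlines the gradient aggregation that DP-SGD performs, following the cited paper: each per-microbatch gradient is clipped to `l2_norm_clip`, the clipped gradients are summed, Gaussian noise with standard deviation `noise_multiplier * l2_norm_clip` is added, and the result is averaged. This is a simplified NumPy illustration of the idea, not the library's implementation; the function name and inputs are made up for the example.

```python
import numpy as np

def dp_gradient_sketch(per_microbatch_grads, l2_norm_clip, noise_multiplier):
  """Illustrative DP-SGD gradient aggregation (not the library's code)."""
  clipped = []
  for g in per_microbatch_grads:                       # one flat gradient per microbatch
    norm = np.linalg.norm(g)
    clipped.append(g / max(1.0, norm / l2_norm_clip))  # rescale so the L2 norm is <= l2_norm_clip
  total = np.sum(clipped, axis=0)                      # sum over microbatches
  noise = np.random.normal(0.0, noise_multiplier * l2_norm_clip, size=total.shape)
  return (total + noise) / len(per_microbatch_grads)   # noisy average gradient
```

With `l2_norm_clip=1.0` and `noise_multiplier=0.5`, for example, the added noise has standard deviation 0.5 per coordinate.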
When instantiating this optimizer, you need to supply several
DP-related arguments followed by the standard arguments for the base
Keras optimizer (`tf.keras.optimizers.Adam`).
As an example, see below or the documentation of `DPOptimizerClass`.

```python
# Create optimizer, supplying the DP arguments followed by standard
# Adam arguments (e.g. learning_rate).
opt = DPKerasAdamOptimizer(l2_norm_clip=1.0, noise_multiplier=0.5,
                           num_microbatches=1, learning_rate=0.001)
```
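The sketch below shows one way the optimizer might be wired into a Keras model. The model architecture, loss, and hyperparameter values are illustrative assumptions, not part of this API; a vector (per-example) loss is used here, following the pattern in the TF Privacy tutorials, so that per-microbatch gradients can be computed.

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy

# Illustrative model and hyperparameters; only the optimizer arguments come
# from this page.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(10),
])

optimizer = tf_privacy.DPKerasAdamOptimizer(
    l2_norm_clip=1.0,        # max L2 norm of each per-microbatch gradient
    noise_multiplier=0.5,    # noise stddev = 0.5 * l2_norm_clip
    num_microbatches=1,      # the whole minibatch is treated as one microbatch
    learning_rate=0.001)     # standard Adam argument, forwarded to the base class

# An unreduced (per-example) loss so the optimizer can form per-microbatch
# gradients before clipping and noising.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
# model.fit(x_train, y_train, batch_size=32, epochs=1)
```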
Args

| Argument | Description |
|---|---|
| `l2_norm_clip` | Clipping norm (maximum L2 norm of per-microbatch gradients). |
| `noise_multiplier` | Ratio of the Gaussian noise standard deviation to the clipping norm. |
| `num_microbatches` | Number of microbatches into which each minibatch is split. Defaults to `None`, which means the number of microbatches equals the batch size (i.e. each microbatch contains exactly one example). If `gradient_accumulation_steps` is greater than 1 and `num_microbatches` is not `None`, the effective number of microbatches is `num_microbatches * gradient_accumulation_steps` (see the example after this table). |
| `gradient_accumulation_steps` | If greater than 1, the optimizer accumulates gradients for this number of optimizer steps before applying them to update the model weights. If set to 1, updates are applied on each optimizer step. |
| `*args` | Passed on to the base class `__init__` method. |
| `**kwargs` | Passed on to the base class `__init__` method. |
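As a quick illustration of the interaction between `num_microbatches` and `gradient_accumulation_steps` described above (the values are arbitrary):

```python
import tensorflow_privacy as tf_privacy

# Hypothetical values, chosen only to illustrate the arithmetic above.
opt = tf_privacy.DPKerasAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=4,              # each minibatch is split into 4 microbatches
    gradient_accumulation_steps=2,   # weights are updated every 2 optimizer steps
    learning_rate=0.001)
# Effective number of microbatches per weight update: 4 * 2 = 8.
```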