tf_privacy.v1.DPAdamGaussianOptimizer
DP subclass of `tf.compat.v1.train.AdamOptimizer`.
    tf_privacy.v1.DPAdamGaussianOptimizer(
        l2_norm_clip,
        noise_multiplier,
        num_microbatches=None,
        unroll_microbatches=False,
        *args,
        **kwargs
    )
You can use this as a differentially private replacement for `tf.compat.v1.train.AdamOptimizer`. This optimizer implements DP-SGD using the standard Gaussian mechanism.
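To make the mechanism concrete, here is a minimal NumPy sketch of the clip-and-noise step that DP-SGD applies to per-microbatch gradients. It is illustrative only, not the library's actual implementation; the function name and shapes are hypothetical.

    # Illustrative sketch only (not the library's code): clip each microbatch
    # gradient to l2_norm_clip, sum, add Gaussian noise whose stddev is
    # noise_multiplier * l2_norm_clip, then average over microbatches.
    import numpy as np

    def noisy_average_gradient(per_microbatch_grads, l2_norm_clip, noise_multiplier):
      # per_microbatch_grads: list of NumPy arrays, one gradient per microbatch.
      clipped = []
      for g in per_microbatch_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most l2_norm_clip.
        clipped.append(g * min(1.0, l2_norm_clip / max(norm, 1e-12)))
      summed = np.sum(clipped, axis=0)
      noise = np.random.normal(
          scale=noise_multiplier * l2_norm_clip, size=summed.shape)
      return (summed + noise) / len(per_microbatch_grads)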
When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for `AdamOptimizer`.
Examples:

    # Create optimizer.
    opt = DPAdamGaussianOptimizer(
        l2_norm_clip=1.0,
        noise_multiplier=0.5,
        num_microbatches=1,
        <standard arguments>)
When using the optimizer, be sure to pass in the loss as a rank-one tensor with one entry for each example.

    # Compute loss as a tensor. Do not call tf.reduce_mean as you
    # would with a standard optimizer.
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)

    train_op = opt.minimize(loss, global_step=global_step)
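Putting the pieces together, the following is a minimal graph-mode sketch. The model, tensor shapes, and learning rate are placeholders chosen for illustration, and it assumes the package is imported as `tensorflow_privacy` with the `v1` namespace available.

    # Minimal illustrative sketch (TF1 graph mode); the model and
    # hyperparameters are hypothetical.
    import tensorflow.compat.v1 as tf
    import tensorflow_privacy as tf_privacy

    tf.disable_eager_execution()

    features = tf.placeholder(tf.float32, shape=[None, 784])
    labels = tf.placeholder(tf.int64, shape=[None])
    logits = tf.layers.dense(features, 10)

    # Vector (per-example) loss: do not reduce with tf.reduce_mean.
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)

    global_step = tf.train.get_or_create_global_step()
    opt = tf_privacy.v1.DPAdamGaussianOptimizer(
        l2_norm_clip=1.0,
        noise_multiplier=0.5,
        num_microbatches=1,
        learning_rate=0.001)  # standard AdamOptimizer arguments follow the DP ones
    train_op = opt.minimize(loss, global_step=global_step)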
Args

| Argument | Description |
| --- | --- |
| `l2_norm_clip` | Clipping norm (maximum L2 norm of per-microbatch gradients). |
| `noise_multiplier` | Ratio of the noise standard deviation to the clipping norm. |
| `num_microbatches` | Number of microbatches into which each minibatch is split. If `None`, defaults to the size of the minibatch, and per-example gradients are computed. |
| `unroll_microbatches` | If true, processes microbatches within a Python loop instead of a `tf.while_loop`. Can be used if using a `tf.while_loop` raises an exception. |
| `*args` | Passed on to the base class `__init__` method. |
| `**kwargs` | Passed on to the base class `__init__` method. |
Methods
get_config
get_config()
Creates configuration for Keras serialization.
This method will be called when Keras creates model checkpoints
and is necessary so that deserialization can be performed.
Returns

A dict object storing arguments to be passed to the `__init__` method upon deserialization.
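As a usage sketch, the returned dict can be inspected, and, per the Returns description above, passed back to the constructor. The exact keys depend on the installed tensorflow_privacy version, so the round trip shown in the comment is an assumption rather than guaranteed behavior.

    # Sketch: inspect the Keras serialization config of an `opt` created as above.
    config = opt.get_config()
    print(config)  # dict of constructor arguments used for deserialization
    # Assumed round trip (keys may vary by version):
    # restored = tf_privacy.v1.DPAdamGaussianOptimizer(**config)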
Class Variables

| Variable | Value |
| --- | --- |
| GATE_GRAPH | `2` |
| GATE_NONE | `0` |
| GATE_OP | `1` |