tf_privacy.v1.VectorizedDPAdam
Vectorized DP subclass of tf.compat.v1.train.AdamOptimizer using Gaussian averaging.

View source on GitHub: https://github.com/tensorflow/privacy/blob/v0.9.0/privacy/privacy/optimizers/dp_optimizer_vectorized.py#L44-L189

Main aliases: tf_privacy.v1.VectorizedDPAdamOptimizer
tf_privacy.v1.VectorizedDPAdam(
l2_norm_clip, noise_multiplier, num_microbatches=None, *args, **kwargs
)
You can use this as a differentially private replacement for tf.compat.v1.train.AdamOptimizer. This optimizer implements DP-SGD using the standard Gaussian mechanism. It differs from DPAdamGaussianOptimizer in that it attempts to vectorize the gradient computation and clipping of microbatches.
When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for AdamOptimizer.
Examples:
# Create optimizer.
opt = VectorizedDPAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    <standard arguments>)
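As a concrete sketch, the <standard arguments> above could be filled in with ordinary AdamOptimizer keyword arguments such as learning_rate; the value below is illustrative only, not a recommended setting.

# Create optimizer with an example learning rate (an assumed value chosen
# only for illustration); learning_rate is a standard AdamOptimizer argument.
opt = VectorizedDPAdamOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    learning_rate=0.001)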
When using the optimizer, be sure to pass in the loss as a
rank-one tensor with one entry for each example.
# Compute loss as a tensor. Do not call tf.reduce_mean as you
# would with a standard optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits)
train_op = opt.minimize(loss, global_step=global_step)
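For additional context, the following sketch shows how the vector loss and the training op might be wired together in a tf.compat.v1 graph-mode setup, reusing the opt constructed above. The placeholder shapes, the single dense layer, and the label format are assumptions made purely for this illustration.

import tensorflow as tf

# Assumed inputs: flattened 28x28 images and integer class labels.
features = tf.compat.v1.placeholder(tf.float32, shape=[None, 784])
labels = tf.compat.v1.placeholder(tf.int32, shape=[None])

# A minimal model, chosen only to make the example self-contained.
logits = tf.compat.v1.layers.dense(features, units=10)

# Rank-one (per-example) loss; note the absence of tf.reduce_mean.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

global_step = tf.compat.v1.train.get_or_create_global_step()
train_op = opt.minimize(loss, global_step=global_step)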
| Args | |
|---|---|
| `l2_norm_clip` | Clipping norm (max L2 norm of per-microbatch gradients). |
| `noise_multiplier` | Ratio of the standard deviation to the clipping norm. |
| `num_microbatches` | Number of microbatches into which each minibatch is split. If `None`, will default to the size of the minibatch, and per-example gradients will be computed; see the sketch after this table. |
| `*args` | These will be passed on to the base class `__init__` method. |
| `**kwargs` | These will be passed on to the base class `__init__` method. |
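As a hedged illustration of how num_microbatches relates to the minibatch size: splitting each minibatch evenly suggests choosing a batch size divisible by num_microbatches. The divisibility assumption and the numbers below are illustrative, not taken from this page.

# Assumed example: pick batch_size divisible by num_microbatches so each
# microbatch holds the same number of examples.
batch_size = 256
num_microbatches = 32
examples_per_microbatch = batch_size // num_microbatches  # 8 examples each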
| Class Variables | |
|---|---|
| GATE_GRAPH | `2` |
| GATE_NONE | `0` |
| GATE_OP | `1` |