DP subclass of `tf.compat.v1.train.AdamOptimizer`.
```python
tf_privacy.v1.DPAdamGaussianOptimizer(
    l2_norm_clip,
    noise_multiplier,
    num_microbatches=None,
    unroll_microbatches=False,
    *args,
    **kwargs
)
```
You can use this as a differentially private replacement for `tf.compat.v1.train.AdamOptimizer`. This optimizer implements DP-SGD using the standard Gaussian mechanism.

When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for `AdamOptimizer`.
Examples:

```python
# Create optimizer.
opt = DPAdamGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    <standard arguments>)
```
When using the optimizer, be sure to pass in the loss as a rank-one tensor with one entry for each example.

```python
# Compute loss as a tensor. Do not call tf.reduce_mean as you
# would with a standard optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
train_op = opt.minimize(loss, global_step=global_step)
```
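For context, here is a minimal end-to-end sketch of the vector-loss pattern in a v1 graph. The import alias, the linear model, and the hyperparameter values (clipping norm, noise multiplier, `learning_rate`) are illustrative assumptions, not part of the documented API.

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy  # assumed import alias

tf.compat.v1.disable_eager_execution()  # v1-style graph sketch

# Illustrative placeholders and a small linear model.
features = tf.compat.v1.placeholder(tf.float32, shape=[None, 10])
labels = tf.compat.v1.placeholder(tf.int64, shape=[None])
logits = tf.compat.v1.layers.dense(features, units=2)

# Vector loss: one entry per example; do not reduce to a scalar.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

global_step = tf.compat.v1.train.get_or_create_global_step()
opt = tf_privacy.v1.DPAdamGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    learning_rate=0.001)  # standard AdamOptimizer argument
train_op = opt.minimize(loss, global_step=global_step)
```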
Args | |
---|---|
`l2_norm_clip` | Clipping norm (max L2 norm of per-microbatch gradients). |
`noise_multiplier` | Ratio of the standard deviation to the clipping norm. |
`num_microbatches` | Number of microbatches into which each minibatch is split. If `None`, will default to the size of the minibatch, and per-example gradients will be computed. |
`unroll_microbatches` | If true, processes microbatches within a Python loop instead of a `tf.while_loop`. Can be used if using a `tf.while_loop` raises an exception. |
`*args` | These will be passed on to the base class `__init__` method. |
`**kwargs` | These will be passed on to the base class `__init__` method. |
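The clipping and noise arguments follow the standard DP-SGD Gaussian mechanism: each per-microbatch gradient is clipped to `l2_norm_clip`, the clipped gradients are summed, Gaussian noise with standard deviation `noise_multiplier * l2_norm_clip` is added, and the result is averaged. The NumPy sketch below is an illustration of that recipe only, not the library's implementation.

```python
import numpy as np

def noisy_average_gradient(per_microbatch_grads, l2_norm_clip, noise_multiplier,
                           seed=0):
    """Illustrative sketch of the standard DP-SGD recipe (not the library code)."""
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_microbatch_grads:
        # Scale down any gradient whose L2 norm exceeds the clipping norm.
        divisor = max(np.linalg.norm(g) / l2_norm_clip, 1.0)
        clipped.append(g / divisor)
    summed = np.sum(clipped, axis=0)
    # noise_multiplier is the ratio of the noise stddev to the clipping norm.
    noise = rng.normal(scale=noise_multiplier * l2_norm_clip, size=summed.shape)
    return (summed + noise) / len(per_microbatch_grads)

# Example: four microbatch gradients of dimension 3.
grads = [np.random.randn(3) for _ in range(4)]
print(noisy_average_gradient(grads, l2_norm_clip=1.0, noise_multiplier=0.5))
```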
Methods

get_config

```python
get_config()
```

Creates configuration for Keras serialization. This method will be called when Keras creates model checkpoints and is necessary so that deserialization can be performed.

Returns | |
---|---|
A `dict` object storing arguments to be passed to the `__init__` method upon deserialization. |
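A brief usage sketch, reusing illustrative constructor arguments; exactly which keys the returned dict contains depends on the base class, so treat the output as illustrative.

```python
opt = tf_privacy.v1.DPAdamGaussianOptimizer(
    l2_norm_clip=1.0, noise_multiplier=0.5, num_microbatches=1,
    learning_rate=0.001)
config = opt.get_config()  # dict of constructor arguments used for deserialization
print(config)
```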
Class Variables | |
---|---|
`GATE_GRAPH` | `2` |
`GATE_NONE` | `0` |
`GATE_OP` | `1` |
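These gate constants are inherited from `tf.compat.v1.train.Optimizer` and control how gradient computation is gated for parallelism. A sketch of passing one to `minimize`, reusing `loss` and `global_step` from the earlier example and assuming the DP subclass preserves the base `gate_gradients` argument:

```python
train_op = opt.minimize(
    loss,
    global_step=global_step,
    gate_gradients=tf_privacy.v1.DPAdamGaussianOptimizer.GATE_GRAPH)
```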