Vectorized DP subclass of `tf.compat.v1.train.AdamOptimizer` using Gaussian averaging.
```python
tf_privacy.v1.VectorizedDPAdam(
    l2_norm_clip, noise_multiplier, num_microbatches=None, *args, **kwargs
)
```
You can use this as a differentially private replacement for
`tf.compat.v1.train.AdamOptimizer`. This optimizer implements DP-SGD using
the standard Gaussian mechanism. It differs from `DPAdamGaussianOptimizer`
in that it attempts to vectorize the gradient computation and clipping of
microbatches.

When instantiating this optimizer, you need to supply several
DP-related arguments followed by the standard arguments for
`AdamOptimizer`.
Examples:

```python
# Create optimizer.
opt = VectorizedDPAdamOptimizer(l2_norm_clip=1.0, noise_multiplier=0.5, num_microbatches=1,
                                <standard arguments>)
```
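As a more concrete sketch (not part of the API reference), suppose the only standard `AdamOptimizer` argument you pass is `learning_rate`, and that `tensorflow_privacy` is installed and imported as `tf_privacy`; the hyperparameter values shown are illustrative:

```python
import tensorflow_privacy as tf_privacy

# The DP-related arguments come first, followed by the usual
# AdamOptimizer arguments (here just learning_rate, as an example).
opt = tf_privacy.v1.VectorizedDPAdam(
    l2_norm_clip=1.0,        # clipping norm applied to each microbatch gradient
    noise_multiplier=0.5,    # ratio of noise stddev to l2_norm_clip
    num_microbatches=1,      # number of microbatches the minibatch is split into
    learning_rate=0.001)     # standard AdamOptimizer argument
```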
When using the optimizer, be sure to pass in the loss as a rank-one tensor with one entry for each example.
```python
# Compute loss as a tensor. Do not call tf.reduce_mean as you
# would with a standard optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
train_op = opt.minimize(loss, global_step=global_step)
```
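Putting the pieces together, here is a minimal graph-mode sketch. The model, input shapes, and hyperparameters are assumptions for illustration only (a single dense layer over 784-dimensional features with 10 classes), not part of the original documentation:

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy

tf.compat.v1.disable_eager_execution()

features = tf.compat.v1.placeholder(tf.float32, [None, 784])
labels = tf.compat.v1.placeholder(tf.int64, [None])

# Toy model for illustration; replace with your own.
logits = tf.compat.v1.layers.dense(features, 10)

# Per-example loss: a rank-one tensor with one entry per example.
# Do not call tf.reduce_mean before handing it to the optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

global_step = tf.compat.v1.train.get_or_create_global_step()
opt = tf_privacy.v1.VectorizedDPAdam(
    l2_norm_clip=1.0,
    noise_multiplier=0.5,
    num_microbatches=1,
    learning_rate=0.001)
train_op = opt.minimize(loss, global_step=global_step)
```

If `num_microbatches` is set, the batch size should be divisible by it, since the per-example losses are grouped into microbatches before clipping and noising.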
Class Variables | |
---|---|
GATE_GRAPH | `2` |
GATE_NONE | `0` |
GATE_OP | `1` |