Differentially private subclass of `tf.compat.v1.train.GradientDescentOptimizer`.
```python
tf_privacy.v1.DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=None,
    unroll_microbatches=False,
    while_loop_parallel_iterations=10,
    *args,
    **kwargs
)
```
You can use this as a differentially private replacement for `tf.compat.v1.train.GradientDescentOptimizer`. Note that you must ensure that any loss processed by this optimizer comes in vector form, with one entry per example, rather than reduced to a scalar.
This is the fully general form of the optimizer that allows you to define your own privacy mechanism. If you are planning to use the standard Gaussian mechanism, it is simpler to use the more specific `DPGradientDescentGaussianOptimizer` class instead.
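For comparison, here is a minimal sketch of the simpler Gaussian-specific variant. The parameter names follow the `DPGradientDescentGaussianOptimizer` constructor; the `learning_rate` value is an arbitrary illustration:

```python
# Sketch of the Gaussian-specific variant: it builds the GaussianSumQuery
# internally from the clip norm and noise multiplier.
opt = tf_privacy.v1.DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,      # clip each microbatch gradient to this L2 norm
    noise_multiplier=0.5,  # noise stddev = noise_multiplier * l2_norm_clip
    num_microbatches=1,    # split each minibatch into this many pieces
    learning_rate=0.1)     # standard GradientDescentOptimizer argument
```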
When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for `GradientDescentOptimizer`.
Examples:
```python
from tensorflow_privacy.privacy.dp_query import gaussian_query

# Create GaussianSumQuery.
dp_sum_query = gaussian_query.GaussianSumQuery(l2_norm_clip=1.0, stddev=0.5)

# Create optimizer. The DP-specific parameters are passed by keyword so
# that the standard GradientDescentOptimizer arguments are not bound
# positionally to while_loop_parallel_iterations.
opt = DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=1,
    unroll_microbatches=False,
    <standard arguments>)
```
When using the optimizer, be sure to pass in the loss as a rank-one tensor with one entry for each example.
```python
# Compute loss as a rank-one tensor. Do not call tf.reduce_mean as you
# would with a standard optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
train_op = opt.minimize(loss, global_step=global_step)
```
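Putting the pieces together, here is a hedged end-to-end sketch of one training graph in TF1 graph mode. The placeholder shapes, the single dense layer, and the learning rate are illustrative assumptions, not part of the documented API:

```python
import tensorflow.compat.v1 as tf
import tensorflow_privacy as tf_privacy
from tensorflow_privacy.privacy.dp_query import gaussian_query

tf.disable_eager_execution()

# Toy graph: 10-dimensional features, 3 classes (illustrative shapes).
features = tf.placeholder(tf.float32, [None, 10])
labels = tf.placeholder(tf.int32, [None])
logits = tf.layers.dense(features, 3)

# Vector loss: one entry per example, no tf.reduce_mean.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

dp_sum_query = gaussian_query.GaussianSumQuery(l2_norm_clip=1.0, stddev=0.5)
global_step = tf.train.get_or_create_global_step()
opt = tf_privacy.v1.DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=1,
    learning_rate=0.1)
train_op = opt.minimize(loss, global_step=global_step)
```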
Args | |
---|---|
`dp_sum_query` | `DPQuery` object, specifying the differential privacy mechanism to use.
`num_microbatches` | Number of microbatches into which each minibatch is split. If `None`, will default to the size of the minibatch, and per-example gradients will be computed.
`unroll_microbatches` | If true, processes microbatches within a Python loop instead of a `tf.while_loop`. Can be used if using a `tf.while_loop` raises an exception.
`while_loop_parallel_iterations` | The number of iterations allowed to run in parallel. It must be a positive integer. Applicable only when `unroll_microbatches` is set to `False`. It gives users some control over memory consumption (see the sketch after this table).
`*args` | These will be passed on to the base class `__init__` method.
`**kwargs` | These will be passed on to the base class `__init__` method.
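If microbatch processing runs out of memory inside the `tf.while_loop`, lowering `while_loop_parallel_iterations` trades parallelism for a smaller peak footprint. A hedged sketch; the specific values here are illustrative:

```python
# Process at most 4 microbatch iterations concurrently to cap peak
# memory, at some cost in throughput (the default is 10).
opt = DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=32,
    unroll_microbatches=False,
    while_loop_parallel_iterations=4,
    learning_rate=0.1)
```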
Class Variables | |
---|---|
`GATE_GRAPH` | `2`
`GATE_NONE` | `0`
`GATE_OP` | `1`
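These constants are inherited from `tf.compat.v1.train.Optimizer` and control how gradient computation is gated relative to its use. A hedged usage sketch, assuming the inherited `minimize` signature:

```python
# GATE_OP (the default) gates gradients per op; GATE_GRAPH waits for all
# gradients of all variables before applying any; GATE_NONE allows
# maximum parallelism with no gating.
train_op = opt.minimize(loss,
                        global_step=global_step,
                        gate_gradients=opt.GATE_GRAPH)
```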