# tf_privacy.v1.DPGradientDescentOptimizer
Differentially private subclass of
[`tf.compat.v1.train.GradientDescentOptimizer`](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer).
```python
tf_privacy.v1.DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=None,
    unroll_microbatches=False,
    while_loop_parallel_iterations=10,
    *args,
    **kwargs
)
```
You can use this as a differentially private replacement for
[`tf.compat.v1.train.GradientDescentOptimizer`](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer).
Note that you must ensure that any loss processed by this optimizer comes in
vector form.
This is the fully general form of the optimizer that allows you to define
your own privacy mechanism. If you are planning to use the standard Gaussian
mechanism, it is simpler to use the more specific
`DPGradientDescentGaussianOptimizer` class instead.
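For comparison, here is a minimal sketch of that Gaussian-mechanism shortcut.
The parameter values below are illustrative assumptions, not recommendations:

```python
from tensorflow_privacy.privacy.optimizers import dp_optimizer

# Equivalent to supplying a GaussianSumQuery with
# stddev = l2_norm_clip * noise_multiplier.
opt = dp_optimizer.DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,       # illustrative clipping norm
    noise_multiplier=1.1,   # illustrative noise scale
    num_microbatches=32,    # illustrative microbatch count
    learning_rate=0.15)     # standard GradientDescentOptimizer argument
```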
When instantiating this optimizer, you need to supply several DP-related
arguments followed by the standard arguments for `GradientDescentOptimizer`.
#### Examples:

```python
# GaussianSumQuery ships in TensorFlow Privacy's dp_query package.
from tensorflow_privacy.privacy.dp_query import gaussian_query

# Create GaussianSumQuery.
dp_sum_query = gaussian_query.GaussianSumQuery(l2_norm_clip=1.0, stddev=0.5)

# Create optimizer.
opt = DPGradientDescentOptimizer(
    dp_sum_query, num_microbatches=1, unroll_microbatches=False,
    <standard arguments>)
```
When using the optimizer, be sure to pass in the loss as a
rank-one tensor with one entry for each example.
```python
# Compute loss as a tensor. Do not call tf.reduce_mean as you
# would with a standard optimizer.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

train_op = opt.minimize(loss, global_step=global_step)
```
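For orientation, here is a self-contained sketch of the pieces above wired
into a graph-mode training step. The model, input shapes, and hyperparameter
values are illustrative assumptions, not part of this API:

```python
import tensorflow.compat.v1 as tf
from tensorflow_privacy.privacy.dp_query import gaussian_query
from tensorflow_privacy.privacy.optimizers import dp_optimizer

tf.disable_eager_execution()

# Illustrative placeholder inputs and a stand-in linear model.
features = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.int64, [None])
logits = tf.layers.dense(features, 10)

# Privacy mechanism: clip each microbatch gradient to L2 norm 1.0,
# then add Gaussian noise with stddev 0.5 (illustrative values).
dp_sum_query = gaussian_query.GaussianSumQuery(l2_norm_clip=1.0, stddev=0.5)

# learning_rate is forwarded to the GradientDescentOptimizer base class.
opt = dp_optimizer.DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=1,
    unroll_microbatches=False,
    learning_rate=0.1)

global_step = tf.train.get_or_create_global_step()

# Vector loss: one entry per example; no tf.reduce_mean.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
train_op = opt.minimize(loss, global_step=global_step)
```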
| Args | |
|---|---|
| `dp_sum_query` | `DPQuery` object, specifying the differential privacy mechanism to use. |
| `num_microbatches` | Number of microbatches into which each minibatch is split. If `None`, defaults to the size of the minibatch, and per-example gradients will be computed. |
| `unroll_microbatches` | If true, processes microbatches within a Python loop instead of a `tf.while_loop`. Can be used if using a `tf.while_loop` raises an exception. |
| `while_loop_parallel_iterations` | The number of iterations allowed to run in parallel. It must be a positive integer. Applicable only when `unroll_microbatches` is set to `False`. It gives users some control over memory consumption. |
| `*args` | These will be passed on to the base class `__init__` method. |
| `**kwargs` | These will be passed on to the base class `__init__` method. |
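As a hedged illustration of the last two arguments: if graph construction
with the default `tf.while_loop` fails in your setup, you can fall back to an
unrolled Python loop. This sketch assumes `dp_sum_query` and the
`dp_optimizer` import from the earlier examples; the values are illustrative:

```python
# Unrolled fallback: microbatches are processed in a Python loop,
# so while_loop_parallel_iterations is not used.
opt = dp_optimizer.DPGradientDescentOptimizer(
    dp_sum_query,
    num_microbatches=16,       # illustrative
    unroll_microbatches=True,  # avoid tf.while_loop entirely
    learning_rate=0.1)
```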
| Class Variables | |
|---|---|
| `GATE_GRAPH` | `2` |
| `GATE_NONE` | `0` |
| `GATE_OP` | `1` |