DPQuery for Gaussian sum queries with adaptive clipping.

Inherits From: SumAggregationDPQuery, DPQuery
tf_privacy.QuantileAdaptiveClipSumQuery(
initial_l2_norm_clip,
noise_multiplier,
target_unclipped_quantile,
learning_rate,
clipped_count_stddev,
expected_num_records,
geometric_update=True
)
The clipping norm is tuned adaptively to converge to a value such that a specified quantile of updates are clipped, using the algorithm of Andrew et al. (https://arxiv.org/abs/1905.03871). See the paper for details and suggested hyperparameter settings.
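As a minimal construction sketch, the query below uses the hyperparameter settings recommended in the Args table that follows (target quantile 0.5, learning rate 0.2, and clipped_count_stddev equal to expected_num_records / 20); the values for initial_l2_norm_clip and noise_multiplier are illustrative placeholders, not recommendations.

```python
import tensorflow_privacy as tf_privacy

query = tf_privacy.QuantileAdaptiveClipSumQuery(
    initial_l2_norm_clip=1.0,        # placeholder starting norm
    noise_multiplier=1.1,            # placeholder noise level
    target_unclipped_quantile=0.5,   # clip at the median update norm
    learning_rate=0.2,               # recommended rate for geometric updates
    clipped_count_stddev=5.0,        # expected_num_records / 20
    expected_num_records=100,
    geometric_update=True)
```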
| Args | |
|---|---|
| initial_l2_norm_clip | The initial value of the clipping norm. |
| noise_multiplier | The stddev of the noise added to the output will be this times the current value of the clipping norm. |
| target_unclipped_quantile | The desired quantile of updates that should be unclipped. For example, a value of 0.8 means a value of l2_norm_clip should be found for which approximately 20% of updates are clipped each round. Andrew et al. recommend setting this to 0.5 to clip at the median. |
| learning_rate | The learning rate for the clipping norm adaptation. With geometric updating, a rate of r means that the clipping norm changes by a factor of at most exp(r) each round. This maximum is attained when \|actual_unclipped_fraction - target_unclipped_quantile\| is 1.0. Andrew et al. recommend setting this to 0.2 for geometric updating. (See the sketch after this table for the update rule.) |
| clipped_count_stddev | The stddev of the noise added to the clipped count. Andrew et al. recommend setting this to expected_num_records / 20 for reasonably fast adaptation and high privacy. |
| expected_num_records | The expected number of records per round, used to estimate the clipped count quantile. |
| geometric_update | If True, use geometric updating of the clip (recommended). |
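The interaction of learning_rate, target_unclipped_quantile, and geometric_update can be summarized by the update rule below. This is a simplified sketch of the geometric update described above; it ignores the noise that is actually added to the clipped count before the unclipped fraction is estimated, and it is not the library's internal implementation.

```python
import math

def geometric_clip_update(clip, unclipped_fraction,
                          target_unclipped_quantile=0.5, learning_rate=0.2):
  """Sketch of the geometric clipping-norm update (noise omitted)."""
  # If unclipped_fraction exceeds the target, fewer records are being clipped
  # than intended, so the clipping norm shrinks; if it falls below the target,
  # the norm grows. The per-round factor is bounded by exp(learning_rate),
  # attained when |unclipped_fraction - target_unclipped_quantile| == 1.0.
  return clip * math.exp(
      -learning_rate * (unclipped_fraction - target_unclipped_quantile))
```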
Methods
accumulate_preprocessed_record
accumulate_preprocessed_record(
sample_state, preprocessed_record
)
Implements tensorflow_privacy.DPQuery.accumulate_preprocessed_record.
accumulate_record
accumulate_record(
params, sample_state, record
)
Accumulates a single record into the sample state.
This is a helper method that simply delegates to preprocess_record and accumulate_preprocessed_record for the common case when both of those functions run on a single device. Typically this will be a simple sum.
| Args | |
|---|---|
| params | The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped). |
| sample_state | The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients. |
| record | The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size 1 microbatches). |

| Returns |
|---|
| The updated sample state. In standard DP-SGD training, the set of previous microbatch gradients with the addition of the record argument. |
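Below is a minimal sketch of one round of accumulation with this query, assuming eager execution and a handful of toy records standing in for per-microbatch gradients. The three-value unpacking of get_noised_result matches recent tensorflow_privacy releases, where it also returns a DpEvent; older releases return only the result and the new global state.

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy

query = tf_privacy.QuantileAdaptiveClipSumQuery(
    initial_l2_norm_clip=1.0, noise_multiplier=1.1,
    target_unclipped_quantile=0.5, learning_rate=0.2,
    clipped_count_stddev=5.0, expected_num_records=100,
    geometric_update=True)

# Toy records standing in for per-microbatch gradients.
records = [tf.constant([0.1, 0.2]), tf.constant([1.5, -0.3])]

global_state = query.initial_global_state()
params = query.derive_sample_params(global_state)
sample_state = query.initial_sample_state(records[0])  # template record

for record in records:
  # Clips the record to the current norm and adds it to the running sum.
  sample_state = query.accumulate_record(params, sample_state, record)

# Adds noise to the sum and adapts the clipping norm in the new global state.
result, global_state, event = query.get_noised_result(sample_state, global_state)
```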
derive_metrics
derive_metrics(
global_state
)
Returns the current clipping norm as a metric.
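A small sketch of reading the adapted clipping norm back out, assuming derive_metrics returns a dict-like mapping of metric names to scalar values; the metric key names are an implementation detail, so the sketch prints everything rather than assuming a particular key.

```python
import tensorflow_privacy as tf_privacy

query = tf_privacy.QuantileAdaptiveClipSumQuery(
    initial_l2_norm_clip=1.0, noise_multiplier=1.1,
    target_unclipped_quantile=0.5, learning_rate=0.2,
    clipped_count_stddev=5.0, expected_num_records=100,
    geometric_update=True)
global_state = query.initial_global_state()

# The mapping includes the current clipping norm.
for name, value in query.derive_metrics(global_state).items():
  print(name, float(value))
```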
derive_sample_params
derive_sample_params(
global_state
)
Implements tensorflow_privacy.DPQuery.derive_sample_params.
get_noised_result
get_noised_result(
sample_state, global_state
)
Implements tensorflow_privacy.DPQuery.get_noised_result.
initial_global_state
initial_global_state()
Implements tensorflow_privacy.DPQuery.initial_global_state.
initial_sample_state
initial_sample_state(
template
)
Implements tensorflow_privacy.DPQuery.initial_sample_state.
merge_sample_states
merge_sample_states(
sample_state_1, sample_state_2
)
Implements tensorflow_privacy.DPQuery.merge_sample_states.
preprocess_record
preprocess_record(
params, record
)
Implements tensorflow_privacy.DPQuery.preprocess_record.