

DPQuery for Gaussian sum queries with adaptive clipping.

Inherits From: SumAggregationDPQuery, DPQuery

The clipping norm is tuned adaptively to converge to a value such that a specified quantile of updates is clipped, using the algorithm of Andrew et al. See the paper for details and suggested hyperparameter settings.

initial_l2_norm_clip The initial value of the clipping norm.
noise_multiplier The stddev of the noise added to the output will be this times the current value of the clipping norm.
target_unclipped_quantile The desired quantile of updates which should be unclipped. I.e., a value of 0.8 means a value of l2_norm_clip should be found for which approximately 20% of updates are clipped each round. Andrew et al. recommend that this be set to 0.5 to clip to the median.
learning_rate The learning rate for the clipping norm adaptation. With geometric updating, a rate of r means that the clipping norm will change by a maximum factor of exp(r) at each round. This maximum is attained when |actual_unclipped_fraction - target_unclipped_quantile| is 1.0. Andrew et al. recommend that this be set to 0.2 for geometric updating.
clipped_count_stddev The stddev of the noise added to the clipped_count. Andrew et al. recommend that this be set to expected_num_records / 20 for reasonably fast adaptation and high privacy.
expected_num_records The expected number of records per round, used to estimate the clipped count quantile.
geometric_update If True, use geometric updating of clip (recommended).
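To illustrate how target_unclipped_quantile and learning_rate interact under geometric updating, here is a standalone sketch of one adaptation step (the function name is hypothetical and this is not the library's implementation; in the real query the unclipped fraction is itself estimated from the noised clipped_count):

```python
import math

def update_clip_geometric(clip, unclipped_fraction,
                          target_unclipped_quantile, learning_rate):
    """One geometric adaptation step for the clipping norm (sketch).

    The clip shrinks when the observed unclipped fraction exceeds the
    target (too few updates are clipped) and grows otherwise, changing
    by at most a factor of exp(learning_rate) per round.
    """
    return clip * math.exp(
        -learning_rate * (unclipped_fraction - target_unclipped_quantile))

# With the recommended settings (target 0.5, rate 0.2): if 90% of
# updates were unclipped this round, the clip is too large and shrinks.
clip = update_clip_geometric(1.0, 0.9, 0.5, 0.2)  # 1.0 * exp(-0.08) ≈ 0.923
```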




Implements tensorflow_privacy.DPQuery.accumulate_preprocessed_record.



Accumulates a single record into the sample state.

This is a helper method that simply delegates to preprocess_record and accumulate_preprocessed_record for the common case when both of those functions run on a single device. Typically the accumulation is a simple sum.

params The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped).
sample_state The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients.
record The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size 1 microbatches).

The updated sample state. In standard DP-SGD training, the set of previous microbatch gradients with the addition of the record argument.
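Conceptually, accumulating a record means clipping it to the current L2 norm bound and adding it into the running sum. A simplified pure-Python sketch of that behavior (illustrative names only; the real query also tracks whether each record was clipped, for the quantile estimate):

```python
import math

def clip_by_l2_norm(record, l2_norm_clip):
    # preprocess_record, conceptually: scale the record down so its
    # L2 norm is at most l2_norm_clip.
    norm = math.sqrt(sum(x * x for x in record))
    if norm <= l2_norm_clip:
        return list(record)
    scale = l2_norm_clip / norm
    return [x * scale for x in record]

def accumulate(sample_state, record, l2_norm_clip):
    # accumulate_record, conceptually: clip, then add into the running sum.
    clipped = clip_by_l2_norm(record, l2_norm_clip)
    return [s + c for s, c in zip(sample_state, clipped)]

state = [0.0, 0.0]
state = accumulate(state, [3.0, 4.0], 1.0)  # norm 5, clipped to [0.6, 0.8]
state = accumulate(state, [0.1, 0.0], 1.0)  # under the clip, added as-is
```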



Returns the current clipping norm as a metric.



Implements tensorflow_privacy.DPQuery.derive_sample_params.



Implements tensorflow_privacy.DPQuery.get_noised_result.



Implements tensorflow_privacy.DPQuery.initial_global_state.



Implements tensorflow_privacy.DPQuery.initial_sample_state.



Implements tensorflow_privacy.DPQuery.merge_sample_states.



Implements tensorflow_privacy.DPQuery.preprocess_record.
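Taken together, these methods follow the standard DPQuery round structure: initialize a sample state, accumulate each record, then produce the round's result and adapt the clip. The following deterministic pure-Python sketch shows that flow with illustrative names; the Gaussian noise on both the sum and the clipped count is omitted for clarity, so this is not differentially private — it only demonstrates the call order and the geometric clip adaptation:

```python
import math

def run_round(records, clip, target_quantile=0.5, learning_rate=0.2):
    # initial_sample_state, conceptually: a zero sum plus an
    # unclipped-record counter.
    dim = len(records[0])
    total, unclipped = [0.0] * dim, 0
    for record in records:  # accumulate_record for each record
        norm = math.sqrt(sum(x * x for x in record))
        if norm <= clip:
            unclipped += 1
            scaled = record
        else:
            scaled = [x * clip / norm for x in record]
        total = [t + s for t, s in zip(total, scaled)]
    # get_noised_result, conceptually: release the sum and update the
    # clip geometrically toward the target unclipped quantile.
    unclipped_fraction = unclipped / len(records)
    new_clip = clip * math.exp(
        -learning_rate * (unclipped_fraction - target_quantile))
    return total, new_clip

records = [[3.0, 4.0], [0.3, 0.4], [0.0, 2.0]]
total, new_clip = run_round(records, clip=1.0)
# Only 1 of 3 records was unclipped (fraction 1/3 < target 0.5),
# so the clip grows slightly for the next round.
```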