Implements the DPQuery interface for structured queries.
Inherits From: DPQuery
tf_privacy.NestedQuery(queries)
NestedQuery evaluates arbitrary nested structures of queries. Records must be nested structures of tensors that are compatible (in type and arity) with the query structure, but are allowed to have deeper structure within each leaf of the query structure. The entire substructure of each record corresponding to a leaf node of the query structure is routed to the corresponding query.
For example, a nested query with structure "[q1, q2]" is compatible with a record of structure "[t1, (t2, t3)]": t1 would be processed by q1, and (t2, t3) would be processed by q2. On the other hand, "[q1, q2]" is not compatible with "(t1, t2)" (type mismatch), "[t1]" (arity mismatch), or "[t1, t2, t3]" (arity mismatch).
It is possible for the same tensor to be consumed by multiple sub-queries, by simply replicating it in the record, for example providing "[t1, t1]" to "[q1, q2]".
NestedQuery is intended to allow privacy mechanisms for groups as described in [McMahan & Andrew, 2018: "A General Approach to Adding Differential Privacy to Iterative Training Procedures" (https://arxiv.org/abs/1812.06210)].
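A minimal sketch of the structure matching described above, assuming two GaussianSumQuery sub-queries with illustrative clip and noise values (the concrete sub-query choice is not prescribed by NestedQuery):

```python
import tensorflow as tf
import tensorflow_privacy as tf_privacy

# Query structure "[q1, q2]"; any DPQuery implementations would do here.
q1 = tf_privacy.GaussianSumQuery(l2_norm_clip=1.0, stddev=0.5)
q2 = tf_privacy.GaussianSumQuery(l2_norm_clip=2.0, stddev=1.0)
nested_query = tf_privacy.NestedQuery([q1, q2])

# Record structure "[t1, (t2, t3)]": t1 is routed to q1, (t2, t3) to q2.
record = [tf.constant([0.1, 0.2]),
          (tf.constant([1.0]), tf.constant([2.0, 3.0]))]
```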
Args | |
---|---|
queries | A nested structure of queries. |
Methods
accumulate_preprocessed_record
accumulate_preprocessed_record(sample_state, preprocessed_record)
Implements tensorflow_privacy.DPQuery.accumulate_preprocessed_record.
accumulate_record
accumulate_record(params, sample_state, record)
Accumulates a single record into the sample state.
This is a helper method that simply delegates to preprocess_record and accumulate_preprocessed_record for the common case when both of those functions run on a single device. Typically the accumulation is a simple sum.
Args | |
---|---|
params | The parameters for the sample. In standard DP-SGD training, the clipping norm for the sample's microbatch gradients (i.e., a maximum norm magnitude to which each gradient is clipped). |
sample_state | The current sample state. In standard DP-SGD training, the accumulated sum of previous clipped microbatch gradients. |
record | The record to accumulate. In standard DP-SGD training, the gradient computed for the examples in one microbatch, which may be the gradient for just one example (for size-1 microbatches). |
Returns | |
---|---|
The updated sample state. In standard DP-SGD training, the sum of previous clipped microbatch gradients with the addition of the record argument. |
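In rough pseudocode, the delegation described above amounts to the following sketch, where query stands for any DPQuery instance such as the NestedQuery on this page:

```python
# accumulate_record(params, sample_state, record) is roughly equivalent to:
preprocessed_record = query.preprocess_record(params, record)
sample_state = query.accumulate_preprocessed_record(sample_state, preprocessed_record)
```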
derive_metrics
derive_metrics(global_state)
Implements tensorflow_privacy.DPQuery.derive_metrics.
derive_sample_params
derive_sample_params(global_state)
Implements tensorflow_privacy.DPQuery.derive_sample_params.
get_noised_result
get_noised_result(sample_state, global_state)
Implements tensorflow_privacy.DPQuery.get_noised_result.
initial_global_state
initial_global_state()
Implements tensorflow_privacy.DPQuery.initial_global_state.
initial_sample_state
initial_sample_state(template=None)
Implements tensorflow_privacy.DPQuery.initial_sample_state.
merge_sample_states
merge_sample_states(sample_state_1, sample_state_2)
Implements tensorflow_privacy.DPQuery.merge_sample_states.
preprocess_record
preprocess_record(params, record)
Implements tensorflow_privacy.DPQuery.preprocess_record.
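Putting the methods above together, a single aggregation round might look like the following sketch, continuing the nested_query and record from the earlier example. The result of get_noised_result is left unpacked because its exact return structure is not spelled out on this page and may vary across library versions.

```python
# One aggregation round with the NestedQuery built earlier.
global_state = nested_query.initial_global_state()
sample_params = nested_query.derive_sample_params(global_state)

# The sample state is initialized from a template with the record's structure.
sample_state = nested_query.initial_sample_state(record)

# Accumulate each record in the sample (only one here).
sample_state = nested_query.accumulate_record(sample_params, sample_state, record)

# Produce the aggregated, noised result.
noised_output = nested_query.get_noised_result(sample_state, global_state)
```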