Builds a learning process for FedAvg with client optimizer scheduling.
```python
tff.learning.algorithms.build_weighted_fed_avg_with_optimizer_schedule(
    model_fn: Callable[[], tff.learning.models.VariableModel],
    client_learning_rate_fn: Callable[[int], float],
    client_optimizer_fn: Callable[[float], TFFOrKerasOptimizer],
    server_optimizer_fn: Union[
        tff.learning.optimizers.Optimizer,
        Callable[[], tf.keras.optimizers.Optimizer]
    ] = fed_avg.DEFAULT_SERVER_OPTIMIZER_FN,
    model_distributor: Optional[tff.learning.templates.DistributionProcess] = None,
    model_aggregator: Optional[tff.aggregators.WeightedAggregationFactory] = None,
    metrics_aggregator: Optional[tff.learning.metrics.MetricsAggregatorType] = None,
    use_experimental_simulation_loop: bool = False
) -> tff.learning.templates.LearningProcess
```
This function creates a `LearningProcess` that performs federated averaging on client models. The iterative process has the following methods inherited from `tff.learning.templates.LearningProcess` (a usage sketch follows the list):

- `initialize`: A `tff.Computation` with the functional type signature `( -> S@SERVER)`, where `S` is a `LearningAlgorithmState` representing the initial state of the server.
- `next`: A `tff.Computation` with the functional type signature `(<S@SERVER, {B*}@CLIENTS> -> <L@SERVER>)`, where `S` is a `tff.learning.templates.LearningAlgorithmState` whose type matches the output of `initialize` and `{B*}@CLIENTS` represents the client datasets. The output `L` contains the updated server state, as well as aggregated metrics at the server, including client training metrics and any other metrics from distribution and aggregation processes.
- `get_model_weights`: A `tff.Computation` with type signature `(S -> M)`, where `S` is a `tff.learning.templates.LearningAlgorithmState` whose type matches the output of `initialize` and `next`, and `M` represents the type of the model weights used during training.
- `set_model_weights`: A `tff.Computation` with type signature `(<S, M> -> S)`, where `S` is a `tff.learning.templates.LearningAlgorithmState` whose type matches the output of `initialize` and `M` represents the type of the model weights used during training.
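As a minimal sketch of driving these four methods, the loop below assumes `learning_process` was returned by this function and that `client_datasets` is a hypothetical list of per-client `tf.data.Dataset`s matching the model's expected input:

```python
# Minimal usage sketch. Assumes `learning_process` was built by
# build_weighted_fed_avg_with_optimizer_schedule and `client_datasets` is a
# hypothetical list of per-client tf.data.Dataset objects.
state = learning_process.initialize()

for round_num in range(10):
  # `next` consumes the current server state and this round's client data,
  # returning the updated state together with aggregated metrics.
  output = learning_process.next(state, client_datasets)
  state = output.state
  print(f'round {round_num}: {output.metrics}')

# Pull the trained model weights (type `M` above) out of the final state;
# `set_model_weights` performs the inverse, seeding a state with weights.
model_weights = learning_process.get_model_weights(state)
state = learning_process.set_model_weights(state, model_weights)
```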
Each time the `next` method is called, the server model is broadcast to each client using a broadcast function. For each client, local training is performed using `client_optimizer_fn`. Each client computes the difference between the client model after training and the initial broadcast model. These model deltas are then aggregated at the server using a weighted aggregation function, with clients weighted by the number of examples they see throughout local training. The aggregate model delta is applied at the server using a server optimizer.
The primary purpose of this implementation of FedAvg is that it allows the client optimizer to be scheduled across rounds. The process keeps track of how many iterations of `.next` have occurred (starting at 0), and for each such `round_num`, the clients will use `client_optimizer_fn(client_learning_rate_fn(round_num))` to perform local optimization. This allows learning rate scheduling (e.g. starting with a large learning rate and decaying it over time) as well as more general optimizer scheduling (e.g. switching optimizers as learning progresses).
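For instance, the sketch below wires up a step-decay schedule. The decay constants and the no-arg `model_fn` are illustrative assumptions, and `tff.learning.optimizers.build_sgdm` is used for concreteness:

```python
import tensorflow as tf
import tensorflow_federated as tff

def client_learning_rate_fn(round_num):
  # Illustrative step decay: start at 0.1 and halve every 10 rounds. TF ops
  # (rather than plain Python arithmetic) keep this serializable by TFF.
  return 0.1 * tf.pow(0.5, tf.cast(round_num // 10, tf.float32))

def client_optimizer_fn(learning_rate):
  # A fresh optimizer is built from whatever learning rate the schedule
  # produced for the current round.
  return tff.learning.optimizers.build_sgdm(learning_rate=learning_rate)

learning_process = (
    tff.learning.algorithms.build_weighted_fed_avg_with_optimizer_schedule(
        model_fn=model_fn,  # assumed no-arg VariableModel constructor
        client_learning_rate_fn=client_learning_rate_fn,
        client_optimizer_fn=client_optimizer_fn))
```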
Args | |
---|---|
`model_fn` | A no-arg function that returns a `tff.learning.models.VariableModel`. This method must not capture TensorFlow tensors or variables and use them. The model must be constructed entirely from scratch on each invocation; returning the same pre-constructed model each call will result in an error. |
`client_learning_rate_fn` | A callable accepting an integer round number and returning a float to be used as a learning rate for the optimizer. The client work will call `client_optimizer_fn(client_learning_rate_fn(round_num))`, where `round_num` is the integer round number. Note that the round numbers supplied will start at 0 and increment by one each time `.next` is called on the resulting process. Also note that this function must be serializable by TFF. |
`client_optimizer_fn` | A callable accepting a float learning rate and returning a `tff.learning.optimizers.Optimizer` or a `tf.keras.Optimizer`. |
`server_optimizer_fn` | A `tff.learning.optimizers.Optimizer`, or a no-arg callable that returns a `tf.keras.Optimizer`. By default, this uses `tf.keras.optimizers.SGD` with a learning rate of 1.0. |
`model_distributor` | An optional `DistributionProcess` that distributes the model weights on the server to the clients. If set to `None`, the distributor is constructed via `distributors.build_broadcast_process`. |
`model_aggregator` | An optional `tff.aggregators.WeightedAggregationFactory` used to aggregate client updates on the server. If `None`, this is set to `tff.aggregators.MeanFactory`. |
`metrics_aggregator` | A function that takes in the metric finalizers (i.e., `tff.learning.models.VariableModel.metric_finalizers()`) and a `tff.types.StructWithPythonType` of the unfinalized metrics (i.e., the TFF type of `tff.learning.models.VariableModel.report_local_unfinalized_metrics()`), and returns a `tff.Computation` for aggregating the unfinalized metrics. If `None`, this is set to `tff.learning.metrics.sum_then_finalize`. |
`use_experimental_simulation_loop` | Controls the reduce loop function for the input dataset. An experimental reduce loop is used for simulation. It is currently necessary to set this flag to `True` for performant GPU simulations. |
Returns | |
---|---|
A `LearningProcess`. |
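As a sketch of the optional knobs above, the call below swaps in a momentum server optimizer and a robust model aggregator. It assumes `tff.learning.robust_aggregator` for adaptive zeroing and clipping, and reuses the hypothetical `model_fn` and schedule callables from the earlier example:

```python
# Sketch only: non-default server optimizer and aggregator. Assumes the
# model_fn / schedule callables defined in the earlier sketch.
learning_process = (
    tff.learning.algorithms.build_weighted_fed_avg_with_optimizer_schedule(
        model_fn=model_fn,
        client_learning_rate_fn=client_learning_rate_fn,
        client_optimizer_fn=client_optimizer_fn,
        # Momentum SGD on the server instead of the default SGD with
        # learning rate 1.0.
        server_optimizer_fn=tff.learning.optimizers.build_sgdm(
            learning_rate=1.0, momentum=0.9),
        # Weighted mean with adaptive zeroing and clipping of client updates.
        model_aggregator=tff.learning.robust_aggregator()))
```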