{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "a930wM_fqUNH"
},
"source": [
"##### Copyright 2021 The TensorFlow Federated Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "jaZ560_3qav4"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Jqyshitv2X_4"
},
"source": [
"# Tuning recommended aggregations for learning"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "deKLg3ZAX1VG"
},
"source": [
"\n",
" \n",
" \n",
" \n",
" \n",
"

"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mZojfDVHVRDl"
},
"source": [
"The `tff.learning` module contains a number of ways to aggregate model udpates with recommended default configuration:\n",
"\n",
"* `tff.learning.robust_aggregator`\n",
"* `tff.learning.dp_aggregator`\n",
"* `tff.learning.compression_aggregator`\n",
"* `tff.learning.secure_aggregator`\n",
"\n",
"In this tutorial, we explain the underlying motivation, how they are implemented, and provide suggestions for how to customize their configuration. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "K6zbM0WNulx4"
},
"source": [
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9l4TQCmxhy2X"
},
"outputs": [],
"source": [
"#@test {\"skip\": true}\n",
"!pip install --quiet --upgrade tensorflow-federated\n",
"!pip install --quiet --upgrade nest-asyncio\n",
"\n",
"import nest_asyncio\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "CSUdFIOsunzK"
},
"outputs": [
{
"data": {
"text/plain": [
"b'Hello, World!'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import math\n",
"import tensorflow_federated as tff\n",
"tff.federated_computation(lambda: 'Hello, World!')()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dDw6X9S66BN_"
},
"source": [
"Aggregation methods are represented by objects that can be passed to `tff.learning.algorithms.build_weighted_fed_avg` (as well as `build_unweighted_fed_avg`) as its `model_aggregator` keyword argument. As such, the aggregators discussed here can be directly used to modify a [previous](federated_learning_for_image_classification.ipynb) [tutorial](federated_learning_for_text_generation.ipynb) on federated learning. \n",
"\n",
"The baseline weighted mean from the [FedAvg](http://proceedings.mlr.press/v54/mcmahan17a/mcmahan17a.pdf) algorithm can be expressed using `tff.aggregators.MeanFactory` as follows:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5cJpB9JZ7-_1"
},
"source": [
"```\n",
"mean = tff.aggregators.MeanFactory()\n",
"iterative_process = tff.learning.algorithms.build_weighted_fed_avg(\n",
" ...,\n",
" model_aggregator=mean)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6pGJ5ohP6iyP"
},
"source": [
"The techniques which can be used to extend the weighted mean covered in this tutorial are:\n",
"\n",
"* Zeroing\n",
"* Clipping\n",
"* Differential Privacy\n",
"* Compression\n",
"* Secure Aggregation\n",
"\n",
"The extension is done using composition, in which the `MeanFactory` wraps an inner factory to which it delegates some part of the aggregation, or is itself wrapped by another aggregation factory. For more detail on the design, see [Implementing custom aggregators](custom_aggregators.ipynb) tutorial.\n",
"\n",
"First, we will explain how to enable and configure these techniques individually, and then show how they can be combined together."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BIlZXTLA2WmA"
},
"source": [
"## Techniques\n",
"\n",
"Before delving into the individual techniques, we first introduce the quantile matching algorithm, which will be useful for configuring the techniques below."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "G8MbIih5-w1U"
},
"source": [
"### Quantile matching\n",
"\n",
"Several of the aggregation techniques below need to use a norm bound that controls some aspect of the aggregation. Such bounds can be provided as a constant, but usually it is better to adapt the bound during the course of training. The recommended way is to use the quantile matching algorithm of [Andrew et al. (2019)](https://arxiv.org/abs/1905.03871), initially proposed for its compatibility with differential privacy but useful more broadly. To estimate the value at a given quantile, you can use `tff.aggregators.PrivateQuantileEstimationProcess`. For example, to adapt to the median of a distribution, you can use:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tacGvJ3yADqy"
},
"outputs": [],
"source": [
"median_estimate = tff.aggregators.PrivateQuantileEstimationProcess.no_noise(\n",
" initial_estimate=1.0, target_quantile=0.5, learning_rate=0.2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bvZiJuqy-yz4"
},
"source": [
"Different techinques which use the quantile estimation algorithm will require different values of the algorithm parameters, as we will see. In general, increasing the `learning_rate` parameter means faster adaptation to the correct quantile, but with a higher variance. The `no_noise` classmethod constructs a quantile matching process that does not add noise for differential privacy."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QadilaFRBod3"
},
"source": [
"### Zeroing\n",
"\n",
"Zeroing refers to replacing unusually large values by zeros. Here, \"unusually large\" could mean larger than a predefined threshold, or large relative to values from previous rounds of the computation. Zeroing can increase system robustness to data corruption on faulty clients.\n",
"\n",
"To compute a mean of values with L-infinity norms larger than `ZEROING_CONSTANT` zeroed-out, we wrap a `tff.aggregators.MeanFactory` with a `tff.aggregators.zeroing_factory` that performs the zeroing:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "K_fO7fdX6sY-"
},
"source": [
"```\n",
"zeroing_mean = tff.aggregators.zeroing_factory(\n",
" zeroing_norm=MY_ZEROING_CONSTANT,\n",
" inner_agg_factory=tff.aggregators.MeanFactory())\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "L3RaYJBjCZSC"
},
"source": [
"Here we wrap a `MeanFactory` with a `zeroing_factory` because we want the (pre-aggregation) effects of the `zeroing_factory` to apply to the values at clients before they are passed to the inner `MeanFactory` for aggregation via averaging.\n",
"\n",
"However, for most applications we recommend adaptive zeroing with the quantile estimator. To do so, we use the quantile matching algorithm as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ME-O5JN2CylU"
},
"outputs": [],
"source": [
"zeroing_norm = tff.aggregators.PrivateQuantileEstimationProcess.no_noise(\n",
" initial_estimate=10.0,\n",
" target_quantile=0.98,\n",
" learning_rate=math.log(10),\n",
" multiplier=2.0,\n",
" increment=1.0)\n",
"zeroing_mean = tff.aggregators.zeroing_factory(\n",
" zeroing_norm=zeroing_norm,\n",
" inner_agg_factory=tff.aggregators.MeanFactory())\n",
"\n",
"# Equivalent to:\n",
"# zeroing_mean = tff.learning.robust_aggregator(clipping=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "C29nBBA9C0w_"
},
"source": [
"The parameters have been chosen so that the process adapts very quickly (relatively large `learning_rate`) to a value somewhat larger than the largest values seen so far. For a quantile estimate `Q`, the threshold used for zeroing will be `Q * multiplier + increment`."
]
},
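{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the threshold formula concrete, here is a small pure-Python sketch of zeroing (the helper names are hypothetical, and this is an illustration, not the TFF implementation). Any client update whose L-infinity norm exceeds `Q * multiplier + increment` is replaced by zeros:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def zeroing_threshold(quantile_estimate, multiplier=2.0, increment=1.0):\n",
"  return quantile_estimate * multiplier + increment\n",
"\n",
"def zero_large_updates(updates, threshold):\n",
"  # Replace any update whose L-infinity norm exceeds the threshold by zeros.\n",
"  return [u if max(abs(x) for x in u) <= threshold else [0.0] * len(u)\n",
"          for u in updates]\n",
"\n",
"updates = [[0.5, -1.0], [100.0, 2.0]]  # The second update looks corrupted.\n",
"threshold = zeroing_threshold(quantile_estimate=10.0)  # 10.0 * 2.0 + 1.0 = 21.0\n",
"zero_large_updates(updates, threshold)  # [[0.5, -1.0], [0.0, 0.0]]"
]
},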
{
"cell_type": "markdown",
"metadata": {
"id": "UIZU_v4EDj4G"
},
"source": [
"### Clipping to bound L2 norm\n",
"\n",
"Clipping client updates (projecting onto an L2 ball) can improve robustness to outliers. A `tff.aggregators.clipping_factory` is structured exactly like `tff.aggregators.zeroing_factory` discussed above, and can take either a constant or a `tff.templates.EstimationProcess` as its `clipping_norm` argument. The recommended best practice is to use clipping that adapts moderately quickly to a moderately high norm, as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ihP2k8NwEVw1"
},
"outputs": [],
"source": [
"clipping_norm = tff.aggregators.PrivateQuantileEstimationProcess.no_noise(\n",
" initial_estimate=1.0,\n",
" target_quantile=0.8,\n",
" learning_rate=0.2)\n",
"clipping_mean = tff.aggregators.clipping_factory(\n",
" clipping_norm=clipping_norm,\n",
" inner_agg_factory=tff.aggregators.MeanFactory())\n",
"\n",
"# Equivalent to:\n",
"# clipping_mean = tff.learning.robust_aggregator(zeroing=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8PITEFHAEa5M"
},
"source": [
"In our experience over many problems, the precise value of `target_quantile` does not seem to matter too much so long as learning rates are tuned appropriately. However, setting it very low may require increasing the server learning rate for best performance, relative to not using clipping, which is why we recommend 0.8 by default."
]
},
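{
"cell_type": "markdown",
"metadata": {},
"source": [
"For intuition, clipping to a bound `clipping_norm` amounts to projecting each client update onto the L2 ball of that radius. A minimal pure-Python sketch follows (the helper name is hypothetical; this is an illustration, not the TFF implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"def clip_by_l2_norm(update, clipping_norm):\n",
"  norm = math.sqrt(sum(x * x for x in update))\n",
"  if norm <= clipping_norm:\n",
"    return update\n",
"  # Rescale so the clipped update has L2 norm exactly clipping_norm.\n",
"  scale = clipping_norm / norm\n",
"  return [x * scale for x in update]\n",
"\n",
"clip_by_l2_norm([3.0, 4.0], clipping_norm=1.0)  # Approximately [0.6, 0.8]."
]
},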
{
"cell_type": "markdown",
"metadata": {
"id": "fopWHNX4E5tE"
},
"source": [
"### Differential Privacy\n",
"\n",
"TFF supports differentially private aggregation as well, using adaptive clipping and Gaussian noise. An aggregator to perform differentially private averaging can be constructed as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3rXCyAB3dUB4"
},
"outputs": [],
"source": [
"dp_mean = tff.aggregators.DifferentiallyPrivateFactory.gaussian_adaptive(\n",
" noise_multiplier=0.1, clients_per_round=100)\n",
"\n",
"# Equivalent to:\n",
"# dp_mean = tff.learning.dp_aggregator(\n",
"# noise_multiplier=0.1, clients_per_round=100, zeroing=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "U5vj-YEoduKm"
},
"source": [
"Guidance on how to set the `noise_multiplier` argument can be found in the [TFF DP tutorial](https://www.tensorflow.org/federated/tutorials/federated_learning_with_differential_privacy)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m8og1NDFFPgJ"
},
"source": [
"### Lossy Compression\n",
"\n",
"Compared to lossless compression such as gzip, lossy compression generally results in a much higher compression ratio and can still be combined with lossless compression afterwards. Since less time needs to be spent on client-to-server communication, training rounds complete faster. Due to the inherently randomized nature of learning algorithms, up to some threshold, the inaccuracy from lossy compression does not have negative impact on the overall performance.\n",
"\n",
"The default recommendation is to use simple uniform quantization (see [Suresh et al.](http://proceedings.mlr.press/v70/suresh17a/suresh17a.pdf) for instance), parameterized by two values: the tensor size compression `threshold` and the number of `quantization_bits`. For every tensor `t`, if the number of elements of `t` is less or equal to `threshold`, it is not compressed. If it is larger, the elements of `t` are quantized using randomized rounding to `quantizaton_bits` bits. That is, we apply the operation\n",
"\n",
"`t = round((t - min(t)) / (max(t) - min(t)) * (2**quantizaton_bits - 1)),`\n",
"\n",
"resulting in integer values in the range of `[0, 2**quantizaton_bits-1]`. The quantized values are directly packed into an integer type for transmission, and then the inverse transformation is applied.\n",
"\n",
"We recommend setting `quantizaton_bits` equal to 8 and `threshold` equal to 20000:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B9QbbcorIyk-"
},
"outputs": [],
"source": [
"compressed_mean = tff.aggregators.MeanFactory(\n",
" tff.aggregators.EncodedSumFactory.quantize_above_threshold(\n",
" quantization_bits=8, threshold=20000))\n",
"\n",
"# Equivalent to:\n",
"# compressed_mean = tff.learning.compression_aggregator(zeroing=False, clipping=False)"
]
},
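{
"cell_type": "markdown",
"metadata": {},
"source": [
"To illustrate the formula above, here is a pure-Python sketch of uniform quantization with randomized rounding (the helper names are hypothetical; this is an illustration, not the implementation used by `EncodedSumFactory`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"def quantize(t, quantization_bits):\n",
"  # Map each element to an integer in [0, 2**quantization_bits - 1].\n",
"  lo, hi = min(t), max(t)\n",
"  levels = 2 ** quantization_bits - 1\n",
"  scaled = [(x - lo) / (hi - lo) * levels for x in t]\n",
"  # Randomized rounding: round up with probability equal to the fractional\n",
"  # part, so the quantization is unbiased in expectation.\n",
"  return [int(s) + (1 if random.random() < s - int(s) else 0) for s in scaled]\n",
"\n",
"def dequantize(q, lo, hi, quantization_bits):\n",
"  levels = 2 ** quantization_bits - 1\n",
"  return [x / levels * (hi - lo) + lo for x in q]\n",
"\n",
"t = [0.0, 0.3, 0.7, 1.0]\n",
"q = quantize(t, quantization_bits=8)\n",
"dequantize(q, min(t), max(t), quantization_bits=8)  # Approximately recovers t."
]
},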
{
"cell_type": "markdown",
"metadata": {
"id": "VK-OxnAAIxdY"
},
"source": [
"#### Tuning suggestions\n",
"\n",
"Both parameters, `quantization_bits` and `threshold` can be adjusted, and the number of clients participating in each training round can also impact the effectiveness of compression.\n",
"\n",
"**Threshold.** The default value of 20000 is chosen because we have observed that variables with small number of elements, such as biases in common layer types, are much more sensitive to introduced noise. Moreover, there is little to be gained from compressing variables with small number of elements in practice, as their uncompressed size is relatively small to begin with.\n",
"\n",
"In some applications it may make sense to change the choice of threshold. For instance, the biases of the output layer of a classification model may be more sensitive to noise. If you are training a language model with a vocabulary of 20004, you may want to set `threshold` to be 20004.\n",
"\n",
"**Quantization bits.** The default value of 8 for `quantization_bits` should be fine for most users. If 8 is working well and you want to squeeze out a bit more performance, you could try taking it down to 7 or 6. If resources permit doing a small grid search, we would recommend that you identify the value for which training becomes unstable or final model quality starts to degrade, and then increase that value by two. For example, if setting `quantization_bits` to 5 works, but setting it to 4 degrades the model, we would recommend the default to be 6 to be \"on the safe side\".\n",
"\n",
"**Clients per round.** Note that significantly increasing the number of clients per round can enable a smaller value for `quantization_bits` to work well, because the randomized inaccuracy introduced by quantization may be evened out by averaging over more client updates."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gt20Tnx3JWkP"
},
"source": [
"### Secure Aggregation\n",
"\n",
"By Secure Aggregation (SecAgg) we refer to a cryptographic protocol wherein client updates are encrypted in such a way that the server can only decrypt their sum. If the number of clients that report back is insufficient, the server will learn nothing at all -- and in no case will the server be able to inspect individual updates. This is realized using the `tff.federated_secure_sum_bitwidth` operator.\n",
"\n",
"The model updates are floating point values, but SecAgg operates on integers. Therefore we need to clip any large values to some bound before discretization to an integer type. The clipping bound can be either a constant or determined adaptively (the recommended default). The integers are then securely summed, and the sum is mapped back to the floating point domain.\n",
"\n",
"To compute a mean with weighted values summed using SecAgg with `MY_SECAGG_BOUND` as the clipping bound, pass `SecureSumFactory` to `MeanFactory` as:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sMhmzjvr63BC"
},
"source": [
"```\n",
"secure_mean = tff.aggregators.MeanFactory(\n",
" tff.aggregators.SecureSumFactory(MY_SECAGG_BOUND))\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-CH7F0zVOMDb"
},
"source": [
"To do the same while determining bounds adaptively:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pLDZVbyEOO0j"
},
"outputs": [],
"source": [
"secagg_bound = tff.aggregators.PrivateQuantileEstimationProcess.no_noise(\n",
" initial_estimate=50.0,\n",
" target_quantile=0.95,\n",
" learning_rate=1.0,\n",
" multiplier=2.0)\n",
"secure_mean = tff.aggregators.MeanFactory(\n",
" tff.aggregators.SecureSumFactory(secagg_bound))\n",
"\n",
"# Equivalent to:\n",
"# secure_mean = tff.learning.secure_aggregator(zeroing=Fasle, clipping=False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5ETn1nulOT9U"
},
"source": [
"#### Tuning suggestions\n",
"\n",
"The adaptive parameters have been chosen so that the bounds are tight (we won't lose much precision in discretization) but clipping happens rarely.\n",
"\n",
"If tuning the parameters, keep in mind that the SecAgg protocol is summing the weighted model updates, after weighting in the mean. The weights are typically the number of data points processed locally, hence between different tasks, the right bound might depend on this quantity.\n",
"\n",
"We do not recommend using the `increment` keyword argument when creating adaptive `secagg_bound`, as this could result in a large relative precision loss, in the case the actual estimate ends up being small.\n",
"\n",
"The above code snippet will use SecAgg only the weighted values. If SecAgg should be also used for the sum of weights, we recommend the bounds to be set as constants, as in a common training setup, the largest possible weight will be known in advance:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UKSySRSOOyG3"
},
"source": [
"```\n",
"secure_mean = tff.aggregators.MeanFactory(\n",
" value_sum_factory=tff.aggregators.SecureSumFactory(secagg_bound),\n",
" weight_sum_factory=tff.aggregators.SecureSumFactory(\n",
" upper_bound_threshold=MAX_WEIGHT, lower_bound_threshold=0.0))\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j847MnTCJcsK"
},
"source": [
"## Composing techniques\n",
"\n",
"Individual techniques for extending a mean introduced above can be combined together.\n",
"\n",
"We recommend the order in which these techniques are applied at clients to be\n",
"\n",
"1. Zeroing\n",
"1. Clipping\n",
"1. Other techniques\n",
"\n",
"The aggregators in `tff.aggregators` module are composed by wrapping \"inner aggregators\" (whose pre-aggregation effects happen last and post-aggregation effects happen first) inside \"outer aggregators\". For example, to perform zeroing, clipping, and compression (in that order), one would write:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "B6WvTgq1Q4hy"
},
"source": [
"```\n",
"# Compression is innermost because its pre-aggregation effects are last.\n",
"compressed_mean = tff.aggregators.MeanFactory(\n",
" tff.aggregators.EncodedSumFactory.quantize_above_threshold(\n",
" quantization_bits=8, threshold=20000))\n",
"# Compressed mean is inner aggregator to clipping...\n",
"clipped_compressed_mean = tff.aggregators.clipping_factory(\n",
" clipping_norm=MY_CLIPPING_CONSTANT,\n",
" inner_agg_factory=compressed_mean)\n",
"# ...which is inner aggregator to zeroing, since zeroing happens first.\n",
"final_aggregator = tff.aggregators.zeroing_factory(\n",
" zeroing_norm=MY_ZEROING_CONSTANT,\n",
" inner_agg_factory=clipped_compressed_mean)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RQ0EZn_vQ5E0"
},
"source": [
"Note that this structure matches the [default aggregators](https://github.com/tensorflow/federated/blob/11e4f632b38745c9b38cc39fa1fe67771c206e77/tensorflow_federated/python/learning/model_update_aggregator.py) for learning algorithms.\n",
"\n",
"Other compositions are possible, too. We extend this document when we are confident that we can provide default configuration which works in multiple different applications. For implementing new ideas, see [Implementing custom aggregators](custom_aggregators.ipynb) tutorial."
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [
"a930wM_fqUNH"
],
"name": "tuning_recommended_aggregators.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}