
Backdoors in Federated Learning

Introduction

Federated learning is an attractive framework for the massively distributed training of deep learning models with thousands or even millions of participants [1]. In every round, the central server distributes the current joint model to a random subset of participants. Each of them trains locally and submits an updated model to the server, which averages the updates into the new joint model. Motivating applications include training image classifiers and next-word predictors on users’ smartphones. To take advantage of a wide range of non-i.i.d. training data while ensuring participants’ privacy, federated learning by design has no visibility into participants’ local data and training. However, is it safe from attack?
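To make the round structure concrete, here is a minimal sketch in Python. This is not the exact protocol of [1]; `local_update` is a placeholder for whatever training a participant runs on its own data.

```python
import random
import numpy as np

def federated_round(global_model, participants, num_selected, local_update):
    """One round of federated averaging (simplified sketch).

    global_model: 1-D numpy array of model weights
    participants: list of local datasets
    local_update: function (weights, data) -> locally trained weights
    """
    selected = random.sample(participants, num_selected)
    # Each selected participant trains locally, starting from the joint model.
    local_models = [local_update(global_model.copy(), data) for data in selected]
    # The server averages the local models into the new joint model.
    return np.mean(local_models, axis=0)
```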

In this blog post, we will first briefly introduce the basics of federated learning and common privacy-preserving strategies. We will then examine a particular kind of attack on federated learning systems that is made possible by those very privacy-preserving techniques. Finally, we review experimental results on attacking and defending federated learning systems.

Federated Learning and Privacy Preservation

Federated learning was introduced to deal with two challenges of today's machine learning. First, data exists in the form of isolated islands: data stored on each person's device is usually private, and organizations hold different kinds of data that are not accessible to other companies. Second, collecting data from these isolated islands into a central location raises strict requirements on data privacy and security.

To address these challenges, federated learning defines different architectures for securely fusing different forms of data, for example horizontal federated learning. Horizontal federated learning merges data along the sample dimension: the datasets share the same feature space but come from different users. Since this setting matches most real-world scenarios (e.g. merging users' data from their mobile phones), most research on federated learning is done in the horizontal setting, including the two papers we are going to introduce.

So what does "privacy" mean in federated learning? The two most important privacy notions used in federated learning are Secure Multi-party Computation (SMC) and Differential Privacy.

The objective of SMC is to guarantee complete zero knowledge among all parties involved: each party, whether it is the server or an individual user, learns nothing about the other parties beyond its own input and output. This is exactly what we want for data privacy, but it is usually not easy to achieve.

Differential privacy deals with another privacy concern: a party that has no direct information about other parties can still infer sensitive attributes by studying the statistics or model updates it does observe. With differential privacy, the data is typically processed by adding noise, so that individual contributions are obscured.
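As a toy illustration of that idea (not tied to either paper's exact mechanism), a value can be perturbed with Gaussian noise before it is shared, so that any individual contribution is harder to infer:

```python
import numpy as np

def noisy_release(value, sensitivity, noise_scale, seed=None):
    """Release a statistic with Gaussian noise scaled to its sensitivity.

    Larger noise_scale hides individual contributions better,
    at the cost of distorting the released value more.
    """
    rng = np.random.default_rng(seed)
    return value + rng.normal(0.0, sensitivity * noise_scale, size=np.shape(value))
```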

Secure aggregation [2] is a practical security protocol for horizontal federated learning that achieves secure multi-party computation: the server learns nothing about the original data stored on any individual user's device, only the aggregate of their updates. However, it is this very technique that makes backdoor attacks possible in federated learning.
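The core trick of [2], heavily simplified here (real secure aggregation derives the masks from pairwise key agreement and handles dropped-out users), is that each pair of users shares a random mask that cancels out in the sum, so the server only ever learns the aggregate:

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Toy pairwise-masking sketch: user i adds a shared mask for every j > i
    and subtracts it for every j < i, so all masks cancel in the sum."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    dim = updates[0].shape
    masks = {(i, j): rng.normal(size=dim) for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = u.copy()
        for j in range(n):
            if j > i:
                m += masks[(i, j)]
            elif j < i:
                m -= masks[(j, i)]
        masked.append(m)
    return masked

updates = [np.ones(3) * k for k in range(4)]
assert np.allclose(sum(masked_updates(updates)), sum(updates))  # server recovers the true sum
```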

How to Backdoor Federated Learning

The next two papers deal with the attack and defense of federated learning systems. In How to Backdoor Federated Learning [3], the attacker's objective is to produce a joint model that achieves high accuracy on both its main task and an attacker-chosen backdoor subtask, and that retains high accuracy on the backdoor subtask for multiple rounds after the attack.

This paper discusses a particular type of backdoor, called a semantic backdoor, which causes the model to produce an attacker-chosen output on unmodified inputs. For example, a backdoored image classifier assigns the label "bird" to all images of purple cars. Another example is a word-prediction model that suggests an attacker-chosen word to complete certain sentences.

Before we get into the backdoor technique, let's first take a look at how normal federated learning operates. Algorithm 1 in the paper describes a standard local update: the user receives the current global model from the server, trains it on local data, and uploads the trained local model back to the server.
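A minimal sketch of this benign local update, assuming a generic `grad_fn` that returns the gradient of the local loss on a batch (the paper's Algorithm 1 is just the usual mini-batch training loop):

```python
import numpy as np

def local_update(global_weights, batches, grad_fn, lr=0.1, epochs=2):
    """Standard benign local update: start from the joint model,
    run a few epochs of SGD on local data, and return the result.

    grad_fn(weights, batch) -> gradient of the local loss on that batch.
    """
    w = global_weights.copy()
    for _ in range(epochs):
        for batch in batches:
            w -= lr * grad_fn(w, batch)
    return w  # uploaded to the server for aggregation
```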

So how is the backdoor achieved? In the paper, the attacker wants to substitute the new global model G with a malicious model X (model replacement). Recall the global update of the federated learning system,

G^{t+1} = G^t + (η/n) Σ_{i=1}^{m} (L_i^{t+1} - G^t),

where the m selected local updates L_i^{t+1} are aggregated with global learning rate η over n total participants. Assuming that only one of the local updates is under the attacker's control, the attacker would like to scale up the weights of this particular update so that it dominates the aggregate. The paper assigns the attacker's submission L̃_m^{t+1} as

L̃_m^{t+1} = (n/η) X - ((n/η) - 1) G^t - Σ_{i=1}^{m-1} (L_i^{t+1} - G^t) ≈ (n/η)(X - G^t) + G^t.

Putting this L̃ into the aggregation results in a global model that is (approximately) replaced by X. The approximately-equal sign is due to the cancellation of the other local updates when the global model is near convergence, which is why the attacker can simply scale its deviation from G^t by γ = n/η.
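A quick numerical sanity check of this substitution, using a toy two-parameter "model" and assuming for simplicity that the attacker knows n and η:

```python
import numpy as np

n, eta, m = 100, 1.0, 10                 # total users, global lr, users per round
G = np.array([0.5, -0.2])                # current global model G^t
X = np.array([5.0, 3.0])                 # attacker's target model
benign = [G + np.random.normal(0, 1e-3, 2) for _ in range(m - 1)]  # near-converged updates

L_mal = (n / eta) * (X - G) + G          # attacker's scaled submission (approximation)
updates = benign + [L_mal]
G_next = G + (eta / n) * sum(u - G for u in updates)
print(G_next)                            # approximately equal to X, up to the small benign deviations
```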

Algorithm 2 shows the local update for the attacker-controlled client. There are two main differences from the normal update in Algorithm 1. First, in every training batch the attacker replaces a certain number of items with items from a backdoor dataset; after local training is done, the attacker obtains a malicious model X. Second, the attacker scales up the local model by a large factor, as described in the equation above, before uploading it.
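A sketch of that attacker-controlled update is given below; the helper names and the simple "replace the first few items of each batch" poisoning are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def malicious_update(global_weights, batches, backdoor_items, grad_fn,
                     lr=0.1, epochs=2, poison_per_batch=5, gamma=100.0):
    """Train a backdoored model X on mixed clean/backdoor batches,
    then scale the update so it dominates the server-side average.

    batches and backdoor_items are lists of training examples.
    """
    w = global_weights.copy()
    for _ in range(epochs):
        for batch in batches:
            # Replace a few items in every batch with backdoor examples.
            poisoned = backdoor_items[:poison_per_batch] + batch[poison_per_batch:]
            w -= lr * grad_fn(w, poisoned)
    X = w
    # Scale up the deviation from the global model (gamma ~ n / eta).
    return gamma * (X - global_weights) + global_weights
```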

Here comes an important implication. A federated learning system that uses the secure aggregation mentioned in the previous section prevents the aggregator from inspecting the models submitted by the participants: there is no way to detect whether the aggregate includes a malicious model, let alone who submitted it. On the other hand, if the system gives up secure aggregation, it can adopt anomaly detectors to counter this kind of backdoor attack, for example a detector that takes the magnitude of the model weights into account. The paper also provides a straightforward way for the attacker to evade such detection: the local model is scaled up only to a bound S permitted by the anomaly detector, so the scaling factor γ becomes

γ = S / ||X - G^t||
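In code, that constrained scaling is just the following (assuming an L2-norm bound S enforced by the hypothetical detector):

```python
import numpy as np

def constrained_gamma(X, G, S):
    """Scale the malicious deviation X - G only up to the permitted norm bound S."""
    return S / np.linalg.norm(X - G)
```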

Experiments

The next paper, Can You Really Backdoor Federated Learning? [4], provides comprehensive experiments on attacking and defending federated learning systems. The authors use the EMNIST dataset, a writer-keyed handwritten digit classification dataset collected from 3383 users with roughly 100 digit images per user. The network architecture is simple: a five-layer CNN with 2 convolution layers, 1 max-pooling layer and 2 dense layers. In each round of training, 30 clients are selected to train the model on their own local data for 5 epochs with batch size 20 and client learning rate 0.1; the server learning rate is 1. The backdoor task is to classify "7"s from multiple selected target clients as "1"s.
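A model along those lines could look like the following Keras sketch; the filter counts, kernel sizes and hidden width are assumptions, since the paper only specifies the layer types:

```python
import tensorflow as tf

def build_emnist_cnn(num_classes=10):
    """Five-layer CNN: two conv layers, one max-pooling layer, two dense layers."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
```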

The first experiment studies the effect of two different attack strategies: random sampling and fixed frequency. In the random-sampling attack, a fixed fraction of the clients is completely compromised, so the number of adversaries selected in each round follows a hypergeometric distribution. In the fixed-frequency attack, a single adversary appears once every f rounds. For a fair comparison, the frequency is set to be inversely proportional to the total number of attackers in the random-sampling attack.

Two different attack strategies

The left graph shows a fixed-frequency attack with one adversary every round, which corresponds to 113 randomly sampled adversaries in the right graph. Both attack models behave similarly, although the fixed-frequency attack, which is less practical, is slightly more effective than random sampling.
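The two participation schedules are easy to simulate; the sketch below follows the setup above (3383 users, 30 selected per round, 113 adversaries in random sampling matching one attack per round) but is only a toy illustration:

```python
import numpy as np

def adversaries_per_round(strategy, rounds=1000, total_users=3383,
                          per_round=30, num_adversaries=113, freq=1, seed=0):
    """Count adversaries participating in each round under the two schedules."""
    rng = np.random.default_rng(seed)
    if strategy == "random_sampling":
        # num_adversaries compromised clients; hypergeometric draw per round.
        return rng.hypergeometric(num_adversaries, total_users - num_adversaries,
                                  per_round, size=rounds)
    if strategy == "fixed_frequency":
        # Exactly one adversary every `freq` rounds.
        return np.array([1 if t % freq == 0 else 0 for t in range(rounds)])
    raise ValueError(strategy)
```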

The next experiment studies how the fraction of corrupted users affects the attack. In the absence of any defense, the performance of the adversary depends heavily on the fraction of adversaries present: as expected, more adversaries means better backdoor-task performance.

Fraction of corrupted users

The last experiment examines the effect of two defense strategies: norm bounding and differential privacy. Norm bounding (clipping each update to a fixed norm) appears to be a valid defense against current backdoor attacks; a bound of 3 greatly suppresses the attack. Adding Gaussian noise on top of norm clipping helps mitigate the attack further without hurting main-task performance much.
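A server-side sketch of the two defenses combined is shown below; the bound of 3 matches the experiment, while the noise scale and the overall function shape are illustrative assumptions:

```python
import numpy as np

def defended_aggregate(global_model, local_models, norm_bound=3.0,
                       noise_std=0.01, server_lr=1.0, seed=0):
    """Clip each client's update to a fixed L2 norm, average, and add Gaussian noise."""
    rng = np.random.default_rng(seed)
    clipped = []
    for w in local_models:
        delta = w - global_model
        delta = delta / max(1.0, np.linalg.norm(delta) / norm_bound)  # norm clipping
        clipped.append(delta)
    mean_delta = np.mean(clipped, axis=0)
    noisy_delta = mean_delta + rng.normal(0.0, noise_std, size=mean_delta.shape)
    return global_model + server_lr * noisy_delta
```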

Conclusion

Federated learning introduces a new kind of security setting that requires new protocols to be designed. Secure aggregation effectively protects the privacy of clients. However, there is a tradeoff between the privacy and the robustness of a federated learning system: secure aggregation ensures user privacy, yet leaves the system defenseless against backdoor attacks. How to incorporate anomaly detection into secure aggregation is still an open question. From a high-level perspective, this may be very difficult, because detecting anomalies in an update inevitably reveals some information about it, violating a portion of the user's privacy.

The effectiveness of a backdoor attack depends on many factors, such as the fraction of compromised users, the frequency and timing of the attack, and the existence and type of anomaly detection. Task difficulty should not be neglected among these factors. Since the last paper works with an easy main task, digit recognition on EMNIST, where the model can reach high accuracy with small amounts of uncorrupted data, it may have underestimated the effectiveness of backdoor attacks.

Citations

  1. Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2), 1–19.
  2. Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H. B., Patel, S., … & Seth, K. (2017, October). Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175–1191).
  3. Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., & Shmatikov, V. (2020, June). How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics (pp. 2938–2948). PMLR.
  4. Sun, Z., Kairouz, P., Suresh, A. T., & McMahan, H. B. (2019). Can you really backdoor federated learning?. arXiv preprint arXiv:1911.07963.
