Affiliation of Author(s):College of Computer Science and Technology / College of Artificial Intelligence / College of Software
Journal:Proc. - IEEE Int. Conf. Trust, Secur. Priv. Comput. Commun./IEEE Int. Conf. Big Data Sci. Eng., TrustCom/BigDataSE
Abstract:Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack by insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, thereby easily degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GANs). Specifically, the attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which the attacker does not possess. The attacker then uses these generated samples to craft poisoning updates and compromises the global model by uploading scaled poisoning updates to the server. Our evaluation shows that the attacker can successfully generate samples resembling those of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task. © 2019 IEEE.
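The "scaled poisoning update" mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example (all function names, the participant count, and the scalar "weights" are illustrative assumptions, not taken from the paper): if benign updates are near zero for a converged model, scaling the malicious update by the number of participants lets it dominate federated averaging.

```python
import numpy as np

def fedavg(updates):
    """Server-side federated averaging of participant updates."""
    return np.mean(updates, axis=0)

def scaled_poison_update(global_weights, poisoned_weights, num_participants):
    """Scale the attacker's update so aggregation yields ~poisoned_weights.

    When the other updates are close to zero (a converged model), scaling
    by the number of participants approximately replaces the global model.
    """
    return num_participants * (poisoned_weights - global_weights)

# Toy round with 3-dimensional "weights" (illustrative values):
n = 10                                   # participants per round (assumed)
g = np.zeros(3)                          # current global model
target = np.array([1.0, -2.0, 0.5])      # attacker's poisoned model

benign = [np.zeros(3) for _ in range(n - 1)]   # benign updates ~ 0
malicious = scaled_poison_update(g, target, n)
new_global = g + fedavg(benign + [malicious])
print(new_global)   # approximately equals the attacker's target weights
```

Because the scaled update is divided by `n` during averaging, the aggregated result lands on the attacker's target almost exactly in this idealized setting; in practice, benign updates are nonzero and the effect is attenuated.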
Translation or Not:no
Date of Publication:2019-08-01
Co-author:Zhang, Jiale,Chen, Junjun,Wu, Di,Chen, Bing,Yu, Shui
Corresponding Author:Zhang, Jiale
Research Associate
Education Level:Nanjing University of Aeronautics and Astronautics
Degree:Doctoral Degree in Engineering
School/Department:College of Aerospace Engineering