Jiale Zhang (张加乐)

Personal Information

Assistant Researcher

Education: Nanjing University of Aeronautics and Astronautics

Degree: Doctor of Engineering (Ph.D.)

Department: College of Aerospace Engineering


Publications


Poisoning attack in federated learning using generative adversarial nets


Affiliation: College of Computer Science and Technology / College of Artificial Intelligence / College of Software

Published in: Proceedings of the IEEE International Conference on Trust, Security and Privacy in Computing and Communications / IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE)

Abstract: Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively by thousands of participants. Only model parameters are exchanged between the server and the participants, which prevents the server from directly accessing the private training data. However, we observe that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, thereby easily degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GAN). Specifically, the attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, to which the attacker has no access. The attacker then uses these generated samples, which are fully under its control, to craft poisoning updates, and compromises the global model by uploading the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task. © 2019 IEEE.
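A minimal PyTorch sketch of the attack flow the abstract describes, for illustration only. All names and hyperparameters (craft_poisoned_update, latent_dim, scale, the step counts) are hypothetical, and using the shared global model as the GAN's discriminator is an assumption about how the mimicry step could be instantiated, not a statement of the paper's exact construction.

```python
# Hypothetical sketch of one malicious client round in GAN-based
# federated learning poisoning; names and hyperparameters are illustrative.
import copy
import torch
import torch.nn.functional as F

def craft_poisoned_update(global_model, generator, latent_dim,
                          target_class, wrong_class,
                          scale=10.0, gan_steps=200, poison_steps=50):
    # Step 1 (assumption): treat the shared global model as the GAN
    # discriminator and train the generator so its samples are classified
    # as `target_class`, i.e. they mimic prototypical samples of another
    # participant's private data.
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(gan_steps):
        z = torch.randn(64, latent_dim)
        fake = generator(z)
        labels = torch.full((64,), target_class, dtype=torch.long)
        loss_g = F.cross_entropy(global_model(fake), labels)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    # Step 2: mislabel the generated samples and fine-tune a local copy
    # of the global model on them, producing a poisoned local model.
    local = copy.deepcopy(global_model)
    opt_l = torch.optim.SGD(local.parameters(), lr=0.01)
    for _ in range(poison_steps):
        z = torch.randn(64, latent_dim)
        fake = generator(z).detach()
        wrong = torch.full((64,), wrong_class, dtype=torch.long)
        loss_p = F.cross_entropy(local(fake), wrong)
        opt_l.zero_grad()
        loss_p.backward()
        opt_l.step()

    # Step 3: scale the parameter delta so the poisoned behavior survives
    # the server's averaging, and return it as this client's "update".
    return {name: scale * (local.state_dict()[name] - p)
            for name, p in global_model.state_dict().items()}
```

The scale factor in Step 3 reflects the abstract's "scaled poisoning updates": because the server averages updates over many clients, an unscaled poisoned delta would be diluted, so the attacker amplifies it before uploading.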


Date of Publication: 2019-08-01

Co-authors: Zhang, Jiale; Chen, Junjun; Wu, Di; Chen, Bing; Yu, Shui

Corresponding Author: Zhang, Jiale