Typical Literature
Backdoor Attacks on Image Classification Models in Deep Neural Networks
Abstract:
Deep neural networks (DNNs) are applied widely in many applications and achieve state-of-the-art performance. However, the structure of a DNN lacks transparency and interpretability for users. Attackers can exploit this feature to embed trojan horses in the DNN structure, such as inserting a backdoor into the DNN, so that the DNN learns both the normal main task and additional malicious tasks at the same time. Besides, DNN training relies on a data set. Attackers can tamper with the training data to interfere with the DNN training process, such as attaching a trigger to input data. Because of these defects in DNN structure and data, the backdoor attack is a serious threat to the security of DNNs. A DNN attacked by a backdoor performs well on benign inputs while it outputs an attacker-specified label on trigger-attached inputs. Backdoor attacks can be conducted in almost every stage of the machine learning pipeline. Although there are a few studies on backdoor attacks on image classification, a systematic review is still rare in this field. This paper is a comprehensive review of backdoor attacks. According to whether attackers have access to the training data, we divide various backdoor attacks into two types: poisoning-based attacks and non-poisoning-based attacks. We go through the details of each work in the timeline, discussing its contributions and deficiencies. We propose a detailed mathematical backdoor model to summarize all kinds of backdoor attacks. In the end, we provide some insights about future studies.
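The backdoor behavior described in the abstract is commonly formalized in this literature (not necessarily with the paper's exact model) as: a backdoored model f_θ satisfies f_θ(x) = y on a benign input x with true label y, while f_θ(x ⊕ t) = y_t for any input stamped with a trigger t, where y_t is the attacker-specified target label. As a minimal sketch of the poisoning-based type the abstract describes, the following Python snippet stamps a trigger patch onto a fraction of training images and relabels them to the target class (BadNets-style data poisoning). All function names and parameters here are illustrative assumptions, not code from the paper.

# Minimal sketch of poisoning-based backdoor data preparation (BadNets-style).
# Assumes images are HxWxC uint8 numpy arrays; names are illustrative only.
import numpy as np

def add_trigger(image, patch_size=3, value=255):
    # Stamp a small white square trigger in the bottom-right corner.
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    # Attach the trigger to a fraction of samples and relabel them
    # to the attacker-specified target class.
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

if __name__ == "__main__":
    # Tiny synthetic demo: 100 blank 32x32 RGB "images" with 10 classes.
    x = np.zeros((100, 32, 32, 3), dtype=np.uint8)
    y = np.arange(100) % 10
    x_p, y_p = poison_dataset(x, y, target_label=7, poison_rate=0.1)
    print(int((y_p == 7).sum()))  # original class-7 samples plus relabeled poisoned ones

A model trained on (x_p, y_p) would learn the main task on benign inputs while associating the trigger patch with class 7, matching the dual-task behavior the abstract describes.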
Keywords:
Authors:
ZHANG Quanxin; MA Wencong; WANG Yajie; ZHANG Yaoyuan; SHI Zhiwei; LI Yuanzhang
Affiliations:
Beijing Institute of Technology, Beijing 100036, China; China Information Technology Security Evaluation Center, Beijing 100085, China
Citation:
[1] ZHANG Quanxin, MA Wencong, WANG Yajie, ZHANG Yaoyuan, SHI Zhiwei, LI Yuanzhang. Backdoor Attacks on Image Classification Models in Deep Neural Networks[J]. Chinese Journal of Electronics, 2022(02): 199-212.
Similar Literature
Efficient Visual Recognition:A Survey on Recent Advances and Brain-inspired Methodologies
Yang Wu; Ding-Heng Wang; Xiao-Tong Lu; Fan Yang; Man Yao; Wei-Sheng Dong; Jian-Bo Shi; Guo-Qi Li. Affiliations: Applied Research Center Laboratory, Tencent Platform and Content Group, Shenzhen 518057, China; School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China; School of Artificial Intelligence, Xidian University, Xi'an 710071, China; Division of Information Science, Nara Institute of Science and Technology, Nara 6300192, Japan; Peng Cheng Laboratory, Shenzhen 518000, China; Department of Computer and Information Science, University of Pennsylvania, Philadelphia PA 19104-6389, USA; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China