Typical Literature
Towards Interpretable Defense Against Adversarial Attacks via Causal Inference
Abstract:
Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks. Most of the existing methods are just stopgaps for specific adversarial samples. The main obstacle is that how adversarial samples fool the deep learning models is still unclear. The underlying working mechanism of adversarial samples has not been well explored, and it is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed, and instructive analysis is provided. Then, we propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments are conducted to demonstrate the effectiveness of the proposed methods. Our methods outperform the state-of-the-art defense methods under various adversarial attacks.
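The abstract states that the causal insights allow adversarial samples to be detected and recognized without any extra model or training, with self-attention/transformer as the analysis tool. Since the paper itself is not reproduced here, the following is only a minimal, hypothetical Python/PyTorch sketch of one attention-statistics-based detection scheme in that spirit (flagging inputs whose attention entropy is an outlier relative to clean-data statistics). All function names, thresholds, and the simulated data below are illustrative assumptions, not the authors' actual method.

# Hypothetical sketch, NOT the paper's algorithm: flag a sample as adversarial
# when its self-attention statistics deviate strongly from statistics collected
# on clean data. This illustrates detection driven by a simple threshold test,
# with no extra model and no additional training.
import torch
import torch.nn.functional as F

def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Mean entropy of attention rows; attn has shape (heads, tokens, tokens)."""
    probs = attn.clamp_min(1e-12)
    ent = -(probs * probs.log()).sum(dim=-1)  # entropy per head, per query token
    return ent.mean()

def calibrate(clean_attns):
    """Estimate mean/std of attention entropy over clean samples."""
    stats = torch.stack([attention_entropy(a) for a in clean_attns])
    return stats.mean(), stats.std()

def is_adversarial(attn, mean, std, z_thresh: float = 3.0) -> bool:
    """Flag samples whose attention entropy is an outlier w.r.t. clean statistics."""
    z = (attention_entropy(attn) - mean).abs() / (std + 1e-12)
    return bool(z > z_thresh)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Simulated attention maps: 8 heads, 50 tokens. In real use these would be
    # the attention weights produced by a trained self-attention/transformer model.
    clean = [F.softmax(torch.randn(8, 50, 50), dim=-1) for _ in range(100)]
    mean, std = calibrate(clean)
    # A "suspicious" map with near one-hot attention (much lower entropy).
    spiky = F.softmax(torch.randn(8, 50, 50) * 20.0, dim=-1)
    print(is_adversarial(clean[0], mean, std))  # expected: False
    print(is_adversarial(spiky, mean, std))     # expected: True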
Keywords:
Authors:
Min Ren; Yun-Long Wang; Zhao-Feng He
Author affiliations:
University of Chinese Academy of Sciences, Beijing 100190, China; Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Laboratory of Visual Computing and Intelligent System, Beijing University of Posts and Telecommunications, Beijing 100876, China
Citation:
[1] Min Ren; Yun-Long Wang; Zhao-Feng He. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference[J]. 机器智能研究(英文), 2022(03): 209-226.