DOI: https://doi.org/10.1145/3665348.3665384

Exploring Robustness under New Adversarial Threats: A Comprehensive Analysis of Deep Neural Network Defenses

Published: 03 July 2024

Abstract

Although many methods have been proposed to improve the robustness of deep neural network models, evaluating these models fairly and consistently remains a challenge. Existing evaluation methods often consider only a limited set of attack types, ignore generalization to new types of attacks, and fail to cover the most advanced defense models. To address these issues, we propose a unified framework for evaluating model robustness under different types of attacks. The framework integrates both Lp and non-Lp attacks and comprehensively evaluates the robustness of a range of state-of-the-art robust models on CIFAR-10 and ImageNet subsets. Experimental results show that even the most advanced defense models exhibit vulnerabilities under certain new types of attacks, highlighting the need for more comprehensive robustness benchmarks and providing guidance for the design of future robust deep learning models.
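The abstract describes a unified evaluation loop over multiple threat models but gives no implementation details. The sketch below is a minimal illustration of what such an evaluation could look like in PyTorch: it measures clean accuracy plus robust accuracy under an Lp attack (L-infinity PGD) and a toy non-Lp attack. The function names (pgd_linf, brightness_shift, evaluate), hyperparameters, and attack set are illustrative assumptions, not the paper's actual framework.

```python
import torch
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD (an example of an Lp attack)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and into the valid image range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def brightness_shift(model, x, y, eps=0.3):
    """Toy non-Lp attack: a single adversarial per-channel brightness shift."""
    shift = torch.zeros(1, x.size(1), 1, 1, device=x.device, requires_grad=True)
    loss = F.cross_entropy(model((x + shift).clamp(0, 1)), y)
    grad = torch.autograd.grad(loss, shift)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()


@torch.no_grad()
def _num_correct(model, x, y):
    return (model(x).argmax(dim=1) == y).float().sum().item()


def evaluate(model, loader, attacks, device="cpu"):
    """Return clean and per-attack robust accuracy for a single model."""
    model = model.to(device).eval()
    correct = {name: 0.0 for name in ["clean", *attacks]}
    total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct["clean"] += _num_correct(model, x, y)
        for name, attack in attacks.items():
            x_adv = attack(model, x, y)
            correct[name] += _num_correct(model, x_adv, y)
        total += x.size(0)
    return {name: c / total for name, c in correct.items()}


# Example usage (model and test_loader are assumed to exist):
# attacks = {"pgd_linf": pgd_linf, "brightness_shift": brightness_shift}
# print(evaluate(model, test_loader, attacks, device="cuda"))
```

A loop of this shape makes the comparison the abstract argues for explicit: the same model and data pass through every threat model, so a defense tuned to one attack type can be checked for generalization to others simply by extending the attacks dictionary.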



    Published In

    GAIIS '24: Proceedings of the 2024 International Conference on Generative Artificial Intelligence and Information Security
    May 2024, 439 pages
    ISBN: 9798400709562
    DOI: 10.1145/3665348

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    GAIIS 2024
