Link to original content: https://api.crossref.org/works/10.1145/3468507.3468519
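The record below is the raw JSON returned by the Crossref REST API for this DOI. As a minimal sketch (not part of the record itself), the fields visible in it — `message.title`, `message.author`, `message.volume`, `message.issue`, `short-container-title` — can be pulled out programmatically; the sample here mirrors the response structure with a trimmed-down inline copy rather than fetching the URL live:

```python
import json

# Miniature stand-in for the Crossref "work" response shown below; the full
# record comes from https://api.crossref.org/works/10.1145/3468507.3468519
sample = json.loads("""
{"status": "ok",
 "message": {"title": ["Adversarial Attacks and Defenses"],
             "volume": "23", "issue": "1",
             "short-container-title": ["SIGKDD Explor. Newsl."],
             "author": [{"given": "Ninghao", "family": "Liu", "sequence": "first"},
                        {"given": "Mengnan", "family": "Du", "sequence": "additional"}]}}
""")

def format_citation(record):
    """Build a short citation string from a Crossref 'work' message."""
    msg = record["message"]
    authors = ", ".join(f"{a['given']} {a['family']}" for a in msg["author"])
    venue = msg["short-container-title"][0]
    return f"{authors}. {msg['title'][0]}. {venue} {msg['volume']}({msg['issue']})."

print(format_citation(sample))
# → Ninghao Liu, Mengnan Du. Adversarial Attacks and Defenses. SIGKDD Explor. Newsl. 23(1).
```

Note that Crossref lists authors in reading order via the `sequence` field ("first" vs. "additional"), and that titles and container titles arrive as one-element arrays, hence the `[0]` indexing.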
{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T18:22:32Z","timestamp":1732040552844},"reference-count":121,"publisher":"Association for Computing Machinery (ACM)","issue":"1","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["SIGKDD Explor. Newsl."],"published-print":{"date-parts":[[2021,5,26]]},"abstract":"<jats:p>Despite recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully crafted perturbations to input, where the perturbations are almost imperceptible to humans but can cause models to make wrong predictions. Techniques to protect models against adversarial input are called adversarial defense methods. Although many approaches have been proposed to study adversarial attacks and defenses in different scenarios, an intriguing and crucial challenge remains: how can we really understand model vulnerability? Inspired by the saying that \"if you know yourself and your enemy, you need not fear the battles\", we may tackle this challenge by interpreting machine learning models to open the black boxes. The goal of model interpretation, or interpretable machine learning, is to extract human-understandable terms for the working mechanism of models. Recently, some approaches have started incorporating interpretation into the exploration of adversarial attacks and defenses. Meanwhile, we also observe that many existing methods of adversarial attacks and defenses, although not explicitly claimed, can be understood from the perspective of interpretation. In this paper, we review recent work on adversarial attacks and defenses, particularly from the perspective of machine learning interpretation. We categorize interpretation into two types: feature-level interpretation and model-level interpretation. 
For each type of interpretation, we elaborate on how it could be used for adversarial attacks and defenses. We then briefly illustrate additional correlations between interpretation and adversaries. Finally, we discuss the challenges and future directions for tackling adversary issues with interpretation.<\/jats:p>","DOI":"10.1145\/3468507.3468519","type":"journal-article","created":{"date-parts":[[2021,5,30]],"date-time":"2021-05-30T00:55:35Z","timestamp":1622336135000},"page":"86-99","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["Adversarial Attacks and Defenses"],"prefix":"10.1145","volume":"23","author":[{"given":"Ninghao","family":"Liu","sequence":"first","affiliation":[{"name":"Texas A&M University, College Station, TX, USA"}]},{"given":"Mengnan","family":"Du","sequence":"additional","affiliation":[{"name":"Texas A&M University, College Station, TX, USA"}]},{"given":"Ruocheng","family":"Guo","sequence":"additional","affiliation":[{"name":"Arizona State University, Tempe, AZ, USA"}]},{"given":"Huan","family":"Liu","sequence":"additional","affiliation":[{"name":"Arizona State University, Tempe, AZ, USA"}]},{"given":"Xia","family":"Hu","sequence":"additional","affiliation":[{"name":"Texas A&M University, College Station, TX, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,5,29]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572","author":"Goodfellow Ian J","year":"2014","unstructured":"Ian J Goodfellow , Jonathon Shlens , and Christian Szegedy . Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 , 2014 . Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014."},{"key":"e_1_2_1_2_1","volume-title":"Intriguing properties of neural networks. 
arXiv preprint arXiv:1312.6199","author":"Szegedy Christian","year":"2013","unstructured":"Christian Szegedy , Wojciech Zaremba , Ilya Sutskever , Joan Bruna , Dumitru Erhan , Ian Goodfellow , and Rob Fergus . Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 , 2013 . Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013."},{"key":"e_1_2_1_3_1","volume-title":"Adversarial machine learning at scale","author":"Kurakin Alexey","year":"2017","unstructured":"Alexey Kurakin , Ian Goodfellow , and Samy Bengio . Adversarial machine learning at scale . 2017 . Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. 2017."},{"key":"e_1_2_1_4_1","volume-title":"Systems and Machine Learning (SysML)","author":"Lei Qi","year":"2019","unstructured":"Qi Lei , Lingfei Wu , Pin-Yu Chen , Alexandros G Dimakis , Inderjit S Dhillon , and Michael Witbrock . Discrete adversarial attacks and submodular optimization with applications to text classification . Systems and Machine Learning (SysML) , 2019 . Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G Dimakis, Inderjit S Dhillon, and Michael Witbrock. Discrete adversarial attacks and submodular optimization with applications to text classification. 
Systems and Machine Learning (SysML), 2019."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220027"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220078"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3357910"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.5555\/3307423.3307424"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/SIEDS.2017.7937699"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_2_1_11_1","volume-title":"Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot , Patrick McDaniel , and Ian Goodfellow . Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 , 2016 . Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016."},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.17"},{"key":"e_1_2_1_13_1","volume-title":"Generalizable datafree objective for crafting universal adversarial perturbations","author":"Mopuri Konda Reddy","unstructured":"Konda Reddy Mopuri , Aditya Ganeshan , and Venkatesh Babu Radhakrishnan . Generalizable datafree objective for crafting universal adversarial perturbations . IEEE transactions on pattern analysis and machine intelligence. Konda Reddy Mopuri, Aditya Ganeshan, and Venkatesh Babu Radhakrishnan. Generalizable datafree objective for crafting universal adversarial perturbations. 
IEEE transactions on pattern analysis and machine intelligence."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2019.00012"},{"key":"e_1_2_1_15_1","volume-title":"Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin , Ian Goodfellow , and Samy Bengio . Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 , 2016 . Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016."},{"key":"e_1_2_1_16_1","volume-title":"Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608","author":"Doshi-Velez Finale","year":"2017","unstructured":"Finale Doshi-Velez and Been Kim . Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 , 2017 . Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017."},{"key":"e_1_2_1_17_1","volume-title":"The structure and function of explanations. Trends in cognitive sciences","author":"Lombrozo Tania","year":"2006","unstructured":"Tania Lombrozo . The structure and function of explanations. Trends in cognitive sciences , 2006 . Tania Lombrozo. The structure and function of explanations. Trends in cognitive sciences, 2006."},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1146\/annurev.psych.57.102904.190100"},{"key":"e_1_2_1_19_1","volume-title":"Studies in the logic of explanation. Philosophy of science","author":"Hempel Carl G","year":"1948","unstructured":"Carl G Hempel and Paul Oppenheim . Studies in the logic of explanation. Philosophy of science , 1948 . Carl G Hempel and Paul Oppenheim. Studies in the logic of explanation. 
Philosophy of science, 1948."},{"key":"e_1_2_1_20_1","volume-title":"Digital Signal Processing","author":"Montavon Gr\u00b4egoire","year":"2018","unstructured":"Gr\u00b4egoire Montavon , Wojciech Samek , and Klaus- Robert M\u00a8uller . Methods for interpreting and understanding deep neural networks . Digital Signal Processing , 2018 . Gr\u00b4egoire Montavon, Wojciech Samek, and Klaus- Robert M\u00a8uller. Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 2018."},{"key":"e_1_2_1_21_1","volume-title":"ICLR","author":"Metzen Jan Hendrik","year":"2017","unstructured":"Jan Hendrik Metzen , Tim Genewein , Volker Fischer , and Bastian Bischo?. On detecting adversarial perturbations . ICLR , 2017 . Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischo?. On detecting adversarial perturbations. ICLR, 2017."},{"key":"e_1_2_1_22_1","volume-title":"Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991","author":"Xie Cihang","year":"2017","unstructured":"Cihang Xie , Jianyu Wang , Zhishuai Zhang , Zhou Ren , and Alan Yuille . Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991 , 2017 . Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991, 2017."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00191"},{"key":"e_1_2_1_24_1","volume-title":"Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155","author":"Xu Weilin","year":"2017","unstructured":"Weilin Xu , David Evans , and Yanjun Qi . Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155 , 2017 . Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. 
arXiv preprint arXiv:1704.01155, 2017."},{"key":"e_1_2_1_25_1","volume-title":"CVPR","author":"Xie Cihang","year":"2019","unstructured":"Cihang Xie , Yuxin Wu , Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness . In CVPR , 2019 . Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In CVPR, 2019."},{"key":"e_1_2_1_26_1","volume-title":"Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083","author":"Madry Aleksander","year":"2017","unstructured":"Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 , 2017 . Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017."},{"key":"e_1_2_1_27_1","volume-title":"2016 IEEE Symposium on Security and Privacy (SP). IEEE.","author":"Papernot Nicolas","unstructured":"Nicolas Papernot , Patrick McDaniel , Xi Wu , Somesh Jha , and Ananthram Swami . Distillation as a defense to adversarial perturbations against deep neural networks . In 2016 IEEE Symposium on Security and Privacy (SP). IEEE. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP). IEEE."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.56"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.5555\/3327757.3327870"},{"key":"e_1_2_1_30_1","volume-title":"Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960","author":"Gong Zhitao","year":"2017","unstructured":"Zhitao Gong , Wenlu Wang , and Wei-Shinn Ku . 
Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960 , 2017 . Zhitao Gong, Wenlu Wang, and Wei-Shinn Ku. Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960, 2017."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134057"},{"key":"e_1_2_1_32_1","volume-title":"On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280","author":"Grosse Kathrin","year":"2017","unstructured":"Kathrin Grosse , Praveen Manoharan , Nicolas Papernot , Michael Backes , and Patrick McDaniel . On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280 , 2017 . Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280, 2017."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313545"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.319"},{"key":"e_1_2_1_35_1","volume-title":"Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034","author":"Simonyan Karen","year":"2013","unstructured":"Karen Simonyan , Andrea Vedaldi , and Andrew Zisserman . Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 , 2013 . Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0048-x"},{"key":"e_1_2_1_37_1","volume-title":"Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592","author":"Murdoch W James","year":"2019","unstructured":"W James Murdoch , Chandan Singh , Karl Kumbier , Reza Abbasi-Asl , and Bin Yu . 
Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592 , 2019 . W James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592, 2019."},{"key":"e_1_2_1_38_1","volume-title":"Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825","author":"Smilkov Daniel","year":"2017","unstructured":"Daniel Smilkov , Nikhil Thorat , Been Kim , Fernanda Vi\u00e9gas , and Martin Wattenberg . Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 , 2017 . Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi\u00e9gas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017."},{"key":"e_1_2_1_39_1","volume-title":"Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204","author":"Tram\u00e8r Florian","year":"2017","unstructured":"Florian Tram\u00e8r , Alexey Kurakin , Nicolas Papernot , Ian Goodfellow , Dan Boneh , and Patrick McDaniel . Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 , 2017 . Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3134600.3134606"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.5555\/3305890.3306024"},{"key":"e_1_2_1_42_1","volume-title":"Distilling knowledge from deep networks with applications to healthcare domain. arXiv preprint arXiv:1512.03542","author":"Che Zhengping","year":"2015","unstructured":"Zhengping Che , Sanjay Purushotham , Robinder Khemani , and Yan Liu . 
Distilling knowledge from deep networks with applications to healthcare domain. arXiv preprint arXiv:1512.03542 , 2015 . Zhengping Che, Sanjay Purushotham, Robinder Khemani, and Yan Liu. Distilling knowledge from deep networks with applications to healthcare domain. arXiv preprint arXiv:1512.03542, 2015."},{"key":"e_1_2_1_43_1","volume-title":"An interpretable classification framework for information extraction from online healthcare forums. Journal of healthcare engineering","author":"Gao Jun","year":"2017","unstructured":"Jun Gao , Ninghao Liu , Mark Lawley , and Xia Hu . An interpretable classification framework for information extraction from online healthcare forums. Journal of healthcare engineering , 2017 . Jun Gao, Ninghao Liu, Mark Lawley, and Xia Hu. An interpretable classification framework for information extraction from online healthcare forums. Journal of healthcare engineering, 2017."},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243792"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"e_1_2_1_47_1","volume-title":"Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye , Nicholas Carlini , and David Wagner . Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples . 2018 . Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. 
2018."},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICESS.2019.8782514"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.5555\/3326943.3326998"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.5555\/3304222.3304371"},{"key":"e_1_2_1_51_1","volume-title":"ICML","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang . Understanding blackbox predictions via influence functions . In ICML , 2017 . Pang Wei Koh and Percy Liang. Understanding blackbox predictions via influence functions. In ICML, 2017."},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3358044"},{"key":"e_1_2_1_53_1","volume-title":"ICML","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang , Yaodong Yu , Jiantao Jiao , Eric Xing , Laurent El Ghaoui , and Michael Jordan . Theoretically principled trade-off between robustness and accuracy . In ICML , 2019 . Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In ICML, 2019."},{"key":"e_1_2_1_54_1","volume-title":"Deep convolutional networks do not classify based on global object shape. PLoS computational biology","author":"Baker Nicholas","year":"2018","unstructured":"Nicholas Baker , Hongjing Lu , Gennady Erlikhman , and Philip J Kellman . Deep convolutional networks do not classify based on global object shape. PLoS computational biology , 2018 . Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J Kellman. Deep convolutional networks do not classify based on global object shape. PLoS computational biology, 2018."},{"key":"e_1_2_1_55_1","volume-title":"Informative dropout for robust representation learning: A shape-bias perspective","author":"Shi Baifeng","year":"2020","unstructured":"Baifeng Shi , Dinghuai Zhang , Qi Dai , Zhanxing Zhu , Yadong Mu , and Jingdong Wang. 
Informative dropout for robust representation learning: A shape-bias perspective . 2020 . Baifeng Shi, Dinghuai Zhang, Qi Dai, Zhanxing Zhu, Yadong Mu, and JingdongWang. Informative dropout for robust representation learning: A shape-bias perspective. 2020."},{"key":"e_1_2_1_56_1","volume-title":"When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. arXiv preprint arXiv:1909.03418","author":"Fidel Gil","year":"2019","unstructured":"Gil Fidel , Ron Bitton , and Asaf Shabtai . When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. arXiv preprint arXiv:1909.03418 , 2019 . Gil Fidel, Ron Bitton, and Asaf Shabtai. When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. arXiv preprint arXiv:1909.03418, 2019."},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.371"},{"key":"e_1_2_1_58_1","volume-title":"Jane- Ling Wang, and Michael I Jordan. Ml-loo: Detecting adversarial examples with feature attribution. arXiv preprint arXiv:1906.03499","author":"Yang Puyudi","year":"2019","unstructured":"Puyudi Yang , Jianbo Chen , Cho-Jui Hsieh , Jane- Ling Wang, and Michael I Jordan. Ml-loo: Detecting adversarial examples with feature attribution. arXiv preprint arXiv:1906.03499 , 2019 . Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane- Ling Wang, and Michael I Jordan. Ml-loo: Detecting adversarial examples with feature attribution. arXiv preprint arXiv:1906.03499, 2019."},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/SIPROCESS.2018.8600516"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403044"},{"key":"e_1_2_1_61_1","volume-title":"Towards interpretable deep neural networks by leveraging adversarial examples. arXiv preprint arXiv:1708.05493","author":"Dong Yinpeng","year":"2017","unstructured":"Yinpeng Dong , Hang Su , Jun Zhu , and Fan Bao . 
Towards interpretable deep neural networks by leveraging adversarial examples. arXiv preprint arXiv:1708.05493 , 2017 . Yinpeng Dong, Hang Su, Jun Zhu, and Fan Bao. Towards interpretable deep neural networks by leveraging adversarial examples. arXiv preprint arXiv:1708.05493, 2017."},{"key":"e_1_2_1_62_1","first-page":"7502","volume-title":"International Conference on Machine Learning","author":"Zhang Tianyuan","year":"2019","unstructured":"Tianyuan Zhang and Zhanxing Zhu . Interpreting adversarially trained convolutional neural networks . In International Conference on Machine Learning , pages 7502 -- 7511 , 2019 . Tianyuan Zhang and Zhanxing Zhu. Interpreting adversarially trained convolutional neural networks. In International Conference on Machine Learning, pages 7502--7511, 2019."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33013681"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00211"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00642"},{"key":"e_1_2_1_66_1","volume-title":"ICML","volume":"23","author":"Kim Been","year":"2018","unstructured":"Been Kim , Martin Wattenberg , Justin Gilmer , Carrie Cai , James Wexler , Fernanda Viegas , Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav) . In ICML , 2018 . SIGKDD Explorations Volume 23 , Issue 1 97 Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In ICML, 2018. 
"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01237-3_8"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220001"},{"key":"e_1_2_1_69_1","first-page":"1","volume-title":"Visualizing higher-layer features of a deep network","author":"Erhan Dumitru","unstructured":"Dumitru Erhan , Yoshua Bengio , Aaron Courville , and Pascal Vincent . Visualizing higher-layer features of a deep network . University of Montreal , page 1 . Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, page 1."},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.23915\/distill.00007"},{"key":"e_1_2_1_71_1","volume-title":"Adversarial patch. arXiv preprint arXiv:1712.09665","author":"Brown Tom B","year":"2017","unstructured":"Tom B Brown , Dandelion Man\u00e9 , Aurko Roy , Mart\u00edn Abadi, and Justin Gilmer . Adversarial patch. arXiv preprint arXiv:1712.09665 , 2017 . Tom B Brown, Dandelion Man\u00e9, Aurko Roy, Mart\u00edn Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017."},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978392"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2013.50"},{"key":"e_1_2_1_74_1","volume-title":"Recent trends in deep learning based natural language processing","author":"Young Tom","year":"2018","unstructured":"Tom Young , Devamanyu Hazarika , Soujanya Poria , and Erik Cambria . Recent trends in deep learning based natural language processing . IEEE Computational Intelligence Magazine, 2018 . Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 2018."},{"key":"e_1_2_1_75_1","volume-title":"Representation learning on graphs: Methods and applications. 
arXiv preprint arXiv:1709.05584","author":"Hamilton William L","year":"2017","unstructured":"William L Hamilton , Rex Ying , and Jure Leskovec . Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584 , 2017 . William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017."},{"key":"e_1_2_1_76_1","volume-title":"ICLR","author":"Higgins Irina","year":"2017","unstructured":"Irina Higgins , Loic Matthey , Arka Pal , Christopher Burgess , Xavier Glorot , Matthew Botvinick , Shakir Mohamed , and Alexander Lerchner . beta-vae : Learning basic visual concepts with a constrained variational framework . ICLR , 2017 . Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR, 2017."},{"key":"e_1_2_1_77_1","volume-title":"ACL","author":"Panigrahi Abhishek","year":"2019","unstructured":"Abhishek Panigrahi , Harsha Vardhan Simhadri , and Chiranjib Bhattacharyya . Word2sense : Sparse interpretable word embeddings . In ACL , 2019 . Abhishek Panigrahi, Harsha Vardhan Simhadri, and Chiranjib Bhattacharyya. Word2sense: Sparse interpretable word embeddings. In ACL, 2019."},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330967"},{"key":"e_1_2_1_79_1","volume-title":"Advances in Neural Information Processing Systems","author":"Ma Jianxin","year":"2019","unstructured":"Jianxin Ma , Chang Zhou , Peng Cui , Hongxia Yang , and Wenwu Zhu . Learning disentangled representations for recommendation . In Advances in Neural Information Processing Systems , 2019 . Jianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, and Wenwu Zhu. Learning disentangled representations for recommendation. 
In Advances in Neural Information Processing Systems, 2019."},{"key":"e_1_2_1_80_1","volume-title":"NeurIPS","author":"Ghorbani Amirata","year":"2019","unstructured":"Amirata Ghorbani , James Wexler , James Y Zou , and Been Kim . Towards automatic concept-based explanations . In NeurIPS , 2019 . Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. In NeurIPS, 2019."},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380227"},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.5555\/2969033.2969045"},{"key":"e_1_2_1_83_1","volume-title":"Graph attention networks. arXiv preprint arXiv:1710.10903","author":"\u00b4c Petar","year":"2017","unstructured":"Petar Veli?ckovi \u00b4c , Guillem Cucurull , Arantxa Casanova , Adriana Romero , Pietro Lio , and Yoshua Bengio . Graph attention networks. arXiv preprint arXiv:1710.10903 , 2017 . Petar Veli?ckovi\u00b4c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017."},{"key":"e_1_2_1_84_1","volume-title":"NeurIPS","author":"Chen Chaofan","year":"2019","unstructured":"Chaofan Chen , Oscar Li , Daniel Tao , Alina Barnett , Cynthia Rudin , and Jonathan K Su . This looks like that: deep learning for interpretable image recognition . In NeurIPS , 2019 . Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: deep learning for interpretable image recognition. In NeurIPS, 2019."},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313445"},{"key":"e_1_2_1_86_1","volume-title":"Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175","author":"Ilyas Andrew","year":"2019","unstructured":"Andrew Ilyas , Shibani Santurkar , Dimitris Tsipras , Logan Engstrom , Brandon Tran , and Aleksander Madry . Adversarial examples are not bugs, they are features. 
arXiv preprint arXiv:1905.02175 , 2019 . Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019."},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00871"},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295230"},{"key":"e_1_2_1_89_1","volume-title":"Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410","author":"Feinman Reuben","year":"2017","unstructured":"Reuben Feinman , Ryan R Curtin , Saurabh Shintre , and Andrew B Gardner . Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410 , 2017 . Reuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017."},{"key":"e_1_2_1_90_1","volume-title":"Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152","author":"Tsipras Dimitris","year":"2018","unstructured":"Dimitris Tsipras , Shibani Santurkar , Logan Engstrom , Alexander Turner , and Aleksander Madry . Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152 , 2018 . Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018."},{"key":"e_1_2_1_91_1","volume-title":"NeurIPS","author":"Santurkar Shibani","year":"2019","unstructured":"Shibani Santurkar , Andrew Ilyas , Dimitris Tsipras , Logan Engstrom , Brandon Tran , and Aleksander Madry . Image synthesis with a single (robust) classifier . In NeurIPS , 2019 . Shibani Santurkar, Andrew Ilyas, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Image synthesis with a single (robust) classifier. 
In NeurIPS, 2019."},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220001"},{"key":"e_1_2_1_93_1","volume-title":"Feature purification: How adversarial training performs robust deep learning. arXiv preprint arXiv:2005.10190","author":"Allen-Zhu Zeyuan","year":"2020","unstructured":"Zeyuan Allen-Zhu and Yuanzhi Li . Feature purification: How adversarial training performs robust deep learning. arXiv preprint arXiv:2005.10190 , 2020 . Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. arXiv preprint arXiv:2005.10190, 2020."},{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397269"},{"key":"e_1_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1145\/3400051.3400058"},{"key":"e_1_2_1_96_1","first-page":"841","article-title":"Counterfactual explanations without opening the black box: Automated decisions and the gdpr","volume":"31","author":"Wachter Sandra","year":"2017","unstructured":"Sandra Wachter , Brent Mittelstadt , and Chris Russell . Counterfactual explanations without opening the black box: Automated decisions and the gdpr . Harv. JL & Tech. , 31 : 841 , 2017 . Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841, 2017.","journal-title":"Harv. JL & Tech."},{"key":"e_1_2_1_97_1","volume-title":"Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220","author":"Li Jiwei","year":"2016","unstructured":"Jiwei Li , Will Monroe , and Dan Jurafsky . Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220 , 2016 . Jiwei Li, Will Monroe, and Dan Jurafsky. Understanding neural networks through representation erasure. 
arXiv preprint arXiv:1612.08220, 2016."},{"key":"e_1_2_1_98_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295440"},{"key":"e_1_2_1_99_1","volume-title":"Explaining deep learning models using causal inference. arXiv preprint arXiv:1811.04376","author":"Narendra Tanmayee","year":"2018","unstructured":"Tanmayee Narendra , Anush Sankaran , Deepak Vijaykeerthy , and Senthil Mani . Explaining deep learning models using causal inference. arXiv preprint arXiv:1811.04376 , 2018 . Tanmayee Narendra, Anush Sankaran, Deepak Vijaykeerthy, and Senthil Mani. Explaining deep learning models using causal inference. arXiv preprint arXiv:1811.04376, 2018."},{"key":"e_1_2_1_100_1","volume-title":"Causal learning and explanation of deep neural networks via autoencoded activations. arXiv preprint arXiv:1802.00541","author":"Harradon Michael","year":"2018","unstructured":"Michael Harradon , Jeff Druce, and Brian Ruttenberg . Causal learning and explanation of deep neural networks via autoencoded activations. arXiv preprint arXiv:1802.00541 , 2018 . Michael Harradon, Jeff Druce, and Brian Ruttenberg. Causal learning and explanation of deep neural networks via autoencoded activations. arXiv preprint arXiv:1802.00541, 2018."},{"key":"e_1_2_1_101_1","volume-title":"Are interpretations fairly evaluated? a definition driven pipeline for post-hoc interpretability. arXiv preprint arXiv:2009.07494","author":"Liu Ninghao","year":"2020","unstructured":"Ninghao Liu , Yunsong Meng , Xia Hu , Tie Wang , and Bo Long . Are interpretations fairly evaluated? a definition driven pipeline for post-hoc interpretability. arXiv preprint arXiv:2009.07494 , 2020 . Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, and Bo Long. Are interpretations fairly evaluated? a definition driven pipeline for post-hoc interpretability. 
arXiv preprint arXiv:2009.07494, 2020."},{"key":"e_1_2_1_102_1","doi-asserted-by":"publisher","DOI":"10.1145\/1518701.1519023"},{"key":"e_1_2_1_103_1","volume-title":"How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682","author":"Narayanan Menaka","year":"2018","unstructured":"Menaka Narayanan , Emily Chen , Jeffrey He , Been Kim , Sam Gershman , and Finale Doshi-Velez . How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682 , 2018 . Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, and Finale Doshi-Velez. How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682, 2018."},{"key":"e_1_2_1_104_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3314119"},{"key":"e_1_2_1_105_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2005.10.008"},{"key":"e_1_2_1_106_1","doi-asserted-by":"publisher","DOI":"10.5555\/3157096.3157352"},{"key":"e_1_2_1_107_1","volume-title":"ICML","author":"Goyal Yash","year":"2019","unstructured":"Yash Goyal , Ziyan Wu , Jan Ernst , Dhruv Batra , Devi Parikh , and Stefan Lee . Counterfactual visual explanations . In ICML , 2019 . Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee. Counterfactual visual explanations. In ICML, 2019."},{"key":"e_1_2_1_108_1","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375830"},{"key":"e_1_2_1_109_1","volume-title":"NeurIPS","author":"Dombrowski Ann-Kathrin","year":"2019","unstructured":"Ann-Kathrin Dombrowski , Maximillian Alber , Christopher Anders , Marcel Ackermann , Klaus-Robert M\u00fcller , and Pan Kessel . Explanations can be manipulated and geometry is to blame . In NeurIPS , 2019 . 
Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert M\u00fcller, and Pan Kessel. Explanations can be manipulated and geometry is to blame. In NeurIPS, 2019."},{"key":"e_1_2_1_110_1","volume-title":"Certifiably robust interpretation in deep learning. arXiv preprint arXiv:1905.12105","author":"Levine Alexander","year":"2019","unstructured":"Alexander Levine , Sahil Singla , and Soheil Feizi . Certifiably robust interpretation in deep learning. arXiv preprint arXiv:1905.12105 , 2019 . Alexander Levine, Sahil Singla, and Soheil Feizi. Certifiably robust interpretation in deep learning. arXiv preprint arXiv:1905.12105, 2019."},{"key":"e_1_2_1_111_1","volume-title":"Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261","author":"Battaglia Peter W","year":"2018","unstructured":"Peter W Battaglia , Jessica B Hamrick , Victor Bapst , Alvaro Sanchez-Gonzalez , Vinicius Zambaldi , Mateusz Malinowski , Andrea Tacchetti , David Raposo , Adam Santoro , Ryan Faulkner , et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261 , 2018 . Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018."},{"key":"e_1_2_1_112_1","doi-asserted-by":"publisher","DOI":"10.5555\/3294996.3295142"},{"key":"e_1_2_1_113_1","doi-asserted-by":"publisher","DOI":"10.5555\/3202377"},{"key":"e_1_2_1_114_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1079"},{"key":"e_1_2_1_115_1","volume-title":"ICDM","author":"Zhou Qinghai","year":"2019","unstructured":"Qinghai Zhou , Liangyue Li , Nan Cao , Lei Ying , and Hanghang Tong . Admiring : Adversarial multinetwork mining . In ICDM , 2019 . Qinghai Zhou, Liangyue Li, Nan Cao, Lei Ying, and Hanghang Tong. 
Admiring: Adversarial multinetwork mining. In ICDM, 2019."},{"key":"e_1_2_1_116_1","volume-title":"Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu , Brendan Dolan-Gavitt , and Siddharth Garg . Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733 , 2017 . Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017."},{"key":"e_1_2_1_117_1","volume-title":"Deep leakage from gradients. arXiv preprint arXiv:1906.08935","author":"Zhu Ligeng","year":"2019","unstructured":"Ligeng Zhu , Zhijian Liu , and Song Han . Deep leakage from gradients. arXiv preprint arXiv:1906.08935 , 2019 . Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. arXiv preprint arXiv:1906.08935, 2019."},{"key":"e_1_2_1_118_1","volume-title":"NeurIPS","author":"Barbu Andrei","year":"2019","unstructured":"Andrei Barbu , David Mayo , Julian Alverio , William Luo , Christopher Wang , Dan Gutfreund , Josh Tenenbaum , and Boris Katz . Objectnet : A large-scale bias-controlled dataset for pushing the limits of object recognition models . In NeurIPS , 2019 . Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In NeurIPS, 2019."},{"key":"e_1_2_1_119_1","volume-title":"Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352","author":"Brown Tom B","year":"2018","unstructured":"Tom B Brown , Nicholas Carlini , Chiyuan Zhang , Catherine Olsson , Paul Christiano , and Ian Goodfellow . Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352 , 2018 . 
Tom B Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, and Ian Goodfellow. Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352, 2018."},{"key":"e_1_2_1_120_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2019.00025"},{"key":"e_1_2_1_121_1","volume-title":"Adversarial examples improve image recognition. arXiv preprint arXiv:1911.09665","author":"Xie Cihang","year":"2019","unstructured":"Cihang Xie , Mingxing Tan , Boqing Gong , Jiang Wang , Alan Yuille , and Quoc V Le . Adversarial examples improve image recognition. arXiv preprint arXiv:1911.09665 , 2019 . Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, and Quoc V Le. Adversarial examples improve image recognition. arXiv preprint arXiv:1911.09665, 2019."}],"container-title":["ACM SIGKDD Explorations Newsletter"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3468507.3468519","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T18:47:43Z","timestamp":1672598863000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3468507.3468519"}},"subtitle":["An Interpretation Perspective"],"short-title":[],"issued":{"date-parts":[[2021,5,26]]},"references-count":121,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,5,26]]}},"alternative-id":["10.1145\/3468507.3468519"],"URL":"http:\/\/dx.doi.org\/10.1145\/3468507.3468519","relation":{},"ISSN":["1931-0145","1931-0153"],"issn-type":[{"value":"1931-0145","type":"print"},{"value":"1931-0153","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,26]]},"assertion":[{"value":"2021-05-29","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}