Abstract
The emergence of new autonomous driving systems and functions – in particular, systems that base their decisions on the output of machine learning subsystems responsible for environment perception – brings a significant change in the risks to the safety and security of transportation. Such Advanced Driver Assistance Systems are vulnerable to new types of malicious attacks, and their properties are often not well understood. This paper demonstrates the theoretical and practical possibility of deliberate physical adversarial attacks against deep learning perception systems in general, with a focus on safety-critical driver assistance applications such as traffic sign classification in particular. Our newly developed traffic sign stickers differ from other, similar methods insofar as they require no special knowledge or precision in their creation and deployment; they therefore present a realistic and severe threat to traffic safety and security. In this paper we preemptively point out the dangers and easily exploitable weaknesses that current and future systems are bound to face.
Zusammenfassung
(Translated from German.) The emergence of new autonomous driving systems and functions – in particular, systems whose environment perception is based on machine learning – brings significant changes to the safety risks in traffic. Such driver assistance systems are susceptible to new forms of malicious attack, and the properties of these systems are often not yet sufficiently studied. This paper demonstrates the theoretical and practical possibility of deliberate physical attacks against deep-learning-based perception systems in general, with a focus on safety-critical driver assistance applications such as traffic sign classification in particular. Our newly developed traffic sign stickers differ from other, similar methods in that they require no special knowledge or precision in their creation and deployment. With these stickers we demonstrate a realistic and serious threat to traffic safety. With this paper we preemptively point out dangers and easily exploitable weaknesses that are to be expected now and in the future.
Funding source: European Social Fund
Award Identifier / Grant number: EFOP-3.6.2-16-2017-00002
Funding statement: The project has been supported by the European Union, co-financed by the European Social Fund, EFOP-3.6.2-16-2017-00002. The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Autonomous Systems National Laboratory Program.
About the authors
Henrietta Lengyel received a B.Sc. degree in Transportation Engineering and an M.Sc. degree in Vehicle Engineering. She is a Ph.D. student and participates in research at the Department of Automotive Technologies, Budapest University of Technology and Economics, Hungary. Her main interests include transportation safety, traffic sign anomalies, and the analysis of critical situations involving highly automated and autonomous vehicles.
Viktor Remeli graduated in 2015 from the Faculty of Information and Communication Technology at the University of Malta. He is currently an assistant research fellow at the Department of Automotive Technologies, BUTE, where he also conducts his Ph.D. studies. His research focuses on deep-learning-based environment perception methods and their verification.
Zsolt Szalay received an M.Sc. degree in electrical engineering from the Budapest University of Technology and Economics (BME) in 1995, an M.Sc. degree in business administration from Corvinus University in 1997, and a Ph.D. degree in mechanical engineering from BME in 2002. He is Associate Professor and Head of the Department of Automotive Technologies at the Budapest University of Technology and Economics, Hungary. He also serves as Head of Research and Innovation at the ZalaZONE Automotive Proving Ground, the unique Hungarian infrastructure for connected and automated vehicle testing. His research interests include advanced automotive technologies related to the testing and validation of highly automated and autonomous vehicles. He is a committed supporter of young talent, both as a Children’s University lecturer and through the BME Automated Drive Lab.
Appendix A Traffic sign class codes
0: ‘SL 20 km/h’,
1: ‘SL 30 km/h’,
2: ‘SL 50 km/h’,
3: ‘SL 60 km/h’,
4: ‘SL 70 km/h’,
5: ‘SL 80 km/h’,
6: ‘End of SL 80 km/h’,
7: ‘SL 100 km/h’,
8: ‘SL 120 km/h’,
9: ‘No passing’,
10: ‘No passing over 3.5 t’,
11: ‘Right-of-way’,
12: ‘Priority road’,
13: ‘Yield’,
14: ‘Stop’,
15: ‘No vehicles’,
16: ‘Prohibited over 3.5 t’,
17: ‘No entry’,
18: ‘General caution’,
19: ‘Dangerous curve left’,
20: ‘Dangerous curve right’,
21: ‘Double curve’,
22: ‘Bumpy road’,
23: ‘Slippery road’,
24: ‘Road narrows (right)’,
25: ‘Road work’,
26: ‘Traffic signals’,
27: ‘Pedestrians’,
28: ‘Children crossing’,
29: ‘Bicycles crossing’,
30: ‘Beware of ice/snow’,
31: ‘Wild animals crossing’,
32: ‘End of all speed and passing limits’,
33: ‘Turn right ahead’,
34: ‘Turn left ahead’,
35: ‘Ahead only’,
36: ‘Go straight or right’,
37: ‘Go straight or left’,
38: ‘Keep right’,
39: ‘Keep left’,
40: ‘Roundabout mandatory’,
41: ‘End of no passing’,
42: ‘End no passing (3.5 t)’
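The codes above map classifier output indices to human-readable sign labels, following the 43-class convention of the German Traffic Sign Recognition Benchmark. As a minimal sketch of how such a mapping might be used to decode a classifier's score vector, consider the following (the `CLASS_NAMES` dictionary excerpt and the example scores are illustrative, not taken from the paper's classifier):

```python
# Map class indices (as listed in Appendix A) to human-readable labels.
# Only a few entries are shown; the full table has all 43 classes.
CLASS_NAMES = {
    0: "SL 20 km/h", 1: "SL 30 km/h", 2: "SL 50 km/h",
    13: "Yield", 14: "Stop", 17: "No entry", 38: "Keep right",
}

def decode_prediction(scores):
    """Return (class index, label) for the highest-scoring class."""
    class_id = max(range(len(scores)), key=lambda i: scores[i])
    return class_id, CLASS_NAMES.get(class_id, f"class {class_id}")

# Example: a 43-element score vector peaking at index 14 ("Stop").
scores = [0.0] * 43
scores[14] = 9.1
print(decode_prediction(scores))  # (14, 'Stop')
```

A successful adversarial patch shifts the maximum of this score vector to a different index, so the decoded label no longer matches the physical sign.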
Appendix B Newly developed adversarial patches
Appendix C Virtually applied patches
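Virtual application means pasting a patch into the pixel array of a sign image before feeding it to the classifier, rather than physically attaching a sticker. A minimal sketch of such an overlay is given below; the image size, patch size, and placement are assumptions for illustration and do not reproduce the actual patches of Appendix B:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay `patch` onto a copy of `image` at (top, left), clipped to bounds."""
    out = image.copy()
    h = min(patch.shape[0], out.shape[0] - top)
    w = min(patch.shape[1], out.shape[1] - left)
    out[top:top + h, left:left + w] = patch[:h, :w]
    return out

# Illustrative 64x64 RGB "sign" (uniform gray) and an 8x8 black patch near its center.
sign = np.full((64, 64, 3), 200, dtype=np.uint8)
patch = np.zeros((8, 8, 3), dtype=np.uint8)
patched = apply_patch(sign, patch, top=28, left=28)
print(patched[30, 30])  # inside the patch region: [0 0 0]
```

In an evaluation loop, the patched image would be passed to the classifier in place of the clean one, and the change in the predicted class recorded.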
Appendix D Real traffic sign with printed physical world patches
© 2021 Walter de Gruyter GmbH, Berlin/Boston