Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study
Abstract
This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework. The proposed framework explicitly learns a degradation transform for the original video inputs, in order to optimize the trade-off between target task performance and the associated privacy budget on the degraded video. A notable challenge is that the privacy budget, often defined and measured in task-driven contexts, cannot be reliably measured by the performance of any single model, because strong privacy protection must hold against any possible model that attempts to extract private information. This uncommon situation motivates two proposed strategies, budget model restarting and ensembling, to improve the generalization of the learned degradation in protecting privacy against unseen hacker models. Novel training strategies, evaluation protocols, and result visualization methods are designed accordingly. Two experiments on privacy-preserving action recognition, with privacy budgets defined in different ways, demonstrate the effectiveness of the proposed framework in maintaining high target task (action recognition) performance while suppressing the risk of privacy breach.
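The trade-off described in the abstract can be summarized as a min-max objective. The notation below is illustrative rather than taken verbatim from the paper: $f_d$ is the learned degradation, $f_T$ the target task model with loss $L_T$, $f_b^{(i)}$ the $i$-th of $M$ ensembled budget (hacker) models with loss $L_B$, and $\gamma$ a weight on the privacy budget. Protecting against any possible hacker corresponds to suppressing the strongest current budget model, i.e., the one with the smallest loss:

```latex
% Degradation and target model: keep target performance high while
% pushing up the loss of the best (worst-case) budget model.
\min_{f_d,\, f_T}\;
  L_T\!\left(f_T(f_d(X)),\, Y_T\right)
  \;-\; \gamma \min_{i=1,\dots,M}
  L_B\!\left(f_b^{(i)}(f_d(X)),\, Y_B\right)

% Each budget model is trained adversarially to recover privacy:
\min_{f_b^{(i)}}\;
  L_B\!\left(f_b^{(i)}(f_d(X)),\, Y_B\right),
  \qquad i = 1, \dots, M
```

Restarting corresponds to periodically re-initializing the $f_b^{(i)}$ so the degradation is not overfit to any particular budget model's weights.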
- Publication: arXiv e-prints
- Pub Date: July 2018
- DOI: 10.48550/arXiv.1807.08379
- arXiv: arXiv:1807.08379
- Bibcode: 2018arXiv180708379W
- Keywords: Computer Science - Computer Vision and Pattern Recognition
- E-Print: A significant extension of this paper is accepted by TPAMI-20; a conference version is accepted by ECCV-18; a shorter version is accepted by the ICML-18 PiMLAI workshop.