Cooperative Pruning in Cross-Domain Deep Neural Network Compression
Shangyu Chen, Wenya Wang, Sinno Jialin Pan
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2102-2108.
https://doi.org/10.24963/ijcai.2019/291
The advancement of deep models poses great challenges to real-world deployment because of the limited computational power and storage space on edge devices. To address this problem, existing works have made progress in compressing deep models by pruning or quantization. However, most existing methods rely on a large amount of training data and a pre-trained model in the same domain, and fail to perform well when only limited in-domain training data is available. This prompts the idea of transferring knowledge from a resource-rich source domain to a target domain with limited data to perform model compression.
In this paper, we propose a method for cross-domain pruning that trains cooperatively in both domains, taking advantage of data and a pre-trained model from the source domain to assist pruning in the target domain. Specifically, the source and target pruned models are trained simultaneously and interactively, with source information transferred through the construction of a cooperative pruning mask. Our method significantly improves pruning quality in the target domain and sheds light on model compression in the cross-domain setting.
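To make the idea of a shared cooperative mask concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes a simple heuristic in which per-weight importance is a convex combination of weight magnitudes from the source and target models, and a single binary mask keeping the most important weights is applied to both. The function name cooperative_mask and the mixing parameter alpha are hypothetical; the paper builds its mask through interactive training rather than a one-shot magnitude rule.

    import numpy as np

    def cooperative_mask(w_source, w_target, sparsity=0.9, alpha=0.5):
        """Build one binary pruning mask shared by source and target weights.

        Importance of each weight is a convex combination of its magnitude
        in the source and target models (alpha is an assumed mixing knob);
        the (1 - sparsity) fraction with the highest score is kept.
        """
        score = alpha * np.abs(w_source) + (1.0 - alpha) * np.abs(w_target)
        k = int(round((1.0 - sparsity) * score.size))  # weights to keep
        threshold = np.partition(score.ravel(), -k)[-k]  # k-th largest score
        return (score >= threshold).astype(w_target.dtype)

    # Toy usage: prune a 4x4 layer to 75% sparsity with equal domain weighting.
    rng = np.random.default_rng(0)
    w_src = rng.normal(size=(4, 4))
    w_tgt = rng.normal(size=(4, 4))
    mask = cooperative_mask(w_src, w_tgt, sparsity=0.75, alpha=0.5)
    w_src_pruned = w_src * mask  # both models share the same sparse support
    w_tgt_pruned = w_tgt * mask
    print(int(mask.sum()), "of", mask.size, "weights kept")

Sharing one mask is what makes the scheme cooperative: the source model's importance evidence directly shapes which target weights survive, which is useful precisely when target data alone gives noisy importance estimates.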
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Deep Learning