dblp: Qing Xu
https://dblp.org/pid/93/1908-15.html
Publications (dblp person page, as of 28 Nov 2024):

Qing Xu, Keyu Wu, Min Wu, Kezhi Mao, Xiaoli Li, Zhenghua Chen: Reinforced Knowledge Distillation for Time Series Regression. IEEE Trans. Artif. Intell. 5(6): 3184-3194 (2024). https://doi.org/10.1109/TAI.2023.3341854 [dblp: https://dblp.org/rec/journals/tai/XuWWMLC24]

Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen: Improve Knowledge Distillation via Label Revision and Data Selection. CoRR abs/2404.03693 (2024). https://doi.org/10.48550/arXiv.2404.03693 [dblp: https://dblp.org/rec/journals/corr/abs-2404-03693]

Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, Chao Jin, Manas Gupta, Xulei Yang, Zhenghua Chen, Mohamed M. Sabry Aly, Jie Lin, Min Wu, Xiaoli Li: From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks. CoRR abs/2405.06038 (2024). https://doi.org/10.48550/arXiv.2405.06038 [dblp: https://dblp.org/rec/journals/corr/abs-2405-06038]

Ruibing Jin, Qing Xu, Min Wu, Yuecong Xu, Dan Li, Xiaoli Li, Zhenghua Chen: LLM-based Knowledge Pruning for Time Series Data Analytics on Edge-computing Devices. CoRR abs/2406.08765 (2024). https://doi.org/10.48550/arXiv.2406.08765 [dblp: https://dblp.org/rec/journals/corr/abs-2406-08765]

Qing Xu, Min Wu, Edwin Khoo, Zhenghua Chen, Xiaoli Li: A Hybrid Ensemble Deep Learning Approach for Early Prediction of Battery Remaining Useful Life. IEEE CAA J. Autom. Sinica 10(1): 177-187 (2023). https://doi.org/10.1109/JAS.2023.123024 [dblp: https://dblp.org/rec/journals/ieeejas/XuWKCL23]

Qing Xu, Min Wu, Xiaoli Li, Kezhi Mao, Zhenghua Chen: Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data. IJCAI 2023: 4460-4468. https://doi.org/10.24963/ijcai.2023/496 [dblp: https://dblp.org/rec/conf/ijcai/001500MC23]

Qing Xu, Min Wu, Xiaoli Li, Kezhi Mao, Zhenghua Chen: Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data. CoRR abs/2307.03347 (2023). https://doi.org/10.48550/arXiv.2307.03347 [dblp: https://dblp.org/rec/journals/corr/abs-2307-03347]

Qing Xu, Zhenghua Chen, Mohamed Ragab, Chao Wang, Min Wu, Xiaoli Li: Contrastive adversarial knowledge distillation for deep model compression in time-series regression tasks. Neurocomputing 485: 242-251 (2022). https://doi.org/10.1016/j.neucom.2021.04.139 [dblp: https://dblp.org/rec/journals/ijon/XuCRWWL22]

Qing Xu, Zhenghua Chen, Keyu Wu, Chao Wang, Min Wu, Xiaoli Li: KDnet-RUL: A Knowledge Distillation Framework to Compress Deep Neural Networks for Machine Remaining Useful Life Prediction. IEEE Trans. Ind. Electron. 69(2): 2022-2032 (2022). https://doi.org/10.1109/TIE.2021.3057030 [dblp: https://dblp.org/rec/journals/tie/XuCWWWL22]