Abstract
As target recognition technology matures, the costs of sample generation, labeling, and network training have become increasingly important. Active learning aims to select as few training samples as possible while still achieving good recognition performance. In this paper, a small number of simulation-generated radar cross-section (RCS) time series are selected as training data, and a sample selection method based on few-shot learning is proposed that combines least-confidence and margin sampling. The effectiveness of the method is verified by target type recognition tests on RCS time series of multiple durations. Using the proposed algorithm, 10 of the 19 available trajectory data sets are selected for training, and the resulting model achieves results comparable to a model trained on all 19 trajectories. Compared with random selection, accuracy improves by 4–10% across different sequence lengths.
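To illustrate how least-confidence and margin-sampling criteria can be combined to rank candidate samples, the sketch below shows one plausible scoring scheme over a model's softmax outputs. It is a minimal illustration, not the authors' exact implementation; the function names, the equal weighting of the two criteria, and the dummy data are assumptions.

```python
# Minimal sketch (assumed, not the paper's exact algorithm) of ranking
# unlabeled RCS sequences by a combined least-confidence / margin score
# and selecting the most uncertain ones for training.
import numpy as np


def uncertainty_scores(probs: np.ndarray) -> np.ndarray:
    """probs: (n_samples, n_classes) softmax outputs for candidate sequences."""
    sorted_p = np.sort(probs, axis=1)[:, ::-1]      # per-row probabilities, descending
    least_confidence = 1.0 - sorted_p[:, 0]         # 1 - max class probability
    margin = sorted_p[:, 0] - sorted_p[:, 1]        # gap between top-2 classes
    # Higher score = more uncertain; the equal 0.5/0.5 weighting is an assumption.
    return 0.5 * least_confidence + 0.5 * (1.0 - margin)


def select_samples(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k most uncertain candidates to add to the training set."""
    return np.argsort(uncertainty_scores(probs))[::-1][:k]


# Example: pick 10 of 19 candidate trajectories from random dummy scores.
rng = np.random.default_rng(0)
dummy_probs = rng.dirichlet(np.ones(5), size=19)    # 19 sequences, 5 target classes
print(select_samples(dummy_probs, k=10))
```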
Funding
Partial financial support was received from the National Natural Science Foundation of China (No. 61871283), the Foundation of Pre-Research on Equipment of China (No. 61400010304), and the Major Civil-Military Integration Project in Tianjin, China (No. 18ZXJMTG00170).
Author information
Contributions
Conceptualization, methodology, formal analysis and investigation: YY and ZZ; writing—original draft preparation: YY; supervision: YL, WM and CL; resources: WM and CL.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.