
Radar target recognition based on few-shot learning

  • Special Issue Paper
  • Published in Multimedia Systems

Abstract

With the continuous development of target recognition technology, increasing attention is paid to the cost of sample generation, label annotation, and network training. Active learning aims to select as few samples as possible while still achieving good recognition performance. In this paper, a small number of simulation-generated radar cross-section (RCS) time series are selected as training data, and a sample selection method based on few-shot learning is proposed that combines least confidence and margin (edge) sampling. The effectiveness of the method is verified by target-type recognition experiments on multi-time RCS time series. Using the proposed algorithm, 10 of the 19 available kinds of trajectory data are selected for training, and the resulting model achieves results comparable to a model trained on all 19 kinds of trajectory data. Compared with random selection, the accuracy is improved by 4–10% across different time-series lengths.
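The full text is not available on this page, but as a rough illustration of the two uncertainty criteria the abstract names, the sketch below scores candidate samples with least confidence and with margin sampling over a classifier's softmax outputs and keeps the most uncertain ones. The equal-weight combination of the two scores, the function names, and the synthetic probabilities are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np


def least_confidence(probs: np.ndarray) -> np.ndarray:
    """Least-confidence score: 1 minus the top class probability per sample."""
    return 1.0 - probs.max(axis=1)


def margin_score(probs: np.ndarray) -> np.ndarray:
    """Margin-sampling score: negative gap between the two largest class
    probabilities, so a smaller margin yields a larger (more uncertain) score."""
    sorted_probs = np.sort(probs, axis=1)
    return -(sorted_probs[:, -1] - sorted_probs[:, -2])


def select_samples(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k most uncertain candidates.

    The equal-weight sum of the two criteria is an assumption made for this
    sketch; the paper's exact combination rule is not visible on this page.
    """
    combined = least_confidence(probs) + margin_score(probs)
    return np.argsort(combined)[::-1][:k]


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    # Hypothetical softmax outputs: 19 candidate trajectory sequences, 5 classes.
    logits = rng.normal(size=(19, 5))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Pick 10 of the 19 candidates, mirroring the 10-of-19 setup in the abstract.
    print(select_samples(probs, k=10))
```

In this toy run, choosing the 10 highest-scoring candidates only mirrors the 10-of-19 trajectory selection described in the abstract; the paper's actual selection loop, network, and RCS data are not reproduced here.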



Funding

Partial financial support was received from the National Natural Science Foundation of China (No. 61871283), the Foundation of Pre-Research on Equipment of China (No. 61400010304), and the Major Civil-Military Integration Project in Tianjin, China (No. 18ZXJMTG00170).

Author information

Contributions

Conceptualization, methodology, formal analysis and investigation: YY and ZZ; writing—original draft preparation: YY; supervision: YL, WM and CL; resources: WM and CL.

Corresponding author

Correspondence to Yang Li.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Yang, Y., Zhang, Z., Mao, W. et al. Radar target recognition based on few-shot learning. Multimedia Systems 29, 2865–2875 (2023). https://doi.org/10.1007/s00530-021-00832-3
