Robust hand pose estimation using visual sensor in IoT environment

The Journal of Supercomputing

Abstract

In Internet of Things (IoT) environments, high-performance visual sensors are used to produce and exploit many kinds of image data. In the field of human–computer interaction in particular, image-sensor interfaces driven by human hands are applicable to sign language recognition, games, object manipulation in virtual reality, and remote surgery. With the popularization of depth cameras, there has been renewed interest in research previously conducted with RGB images. Nevertheless, hand pose estimation remains difficult: it involves high-dimensional degrees of freedom, shape variation, self-occlusion, and real-time constraints. To address these issues, this study proposes a random-forest-based method that hierarchically estimates hand pose from depth images. The hierarchical scheme handles the palm and the fingers separately, using an inverse matrix, to cope with the high-dimensional degrees of freedom, shape variation, and self-occlusion. For real-time execution, random forests with simple features are applied. Experimental results show that the proposed hierarchical method estimates hand pose from input depth images more robustly and quickly than existing methods.
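
The abstract does not give implementation details, but per-pixel random forests with "simple features" on depth images are commonly built on depth-comparison features of the kind popularized for body-part recognition from depth maps. The sketch below is a minimal illustration under that assumption only; the offset scale, the palm/finger label set, and the palm-then-fingers use of the classifier are hypothetical and are not the authors' exact pipeline.

```python
# Minimal sketch: per-pixel hand-part classification with a random forest
# over simple depth-difference features (an assumed feature type, not
# necessarily the one used in the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def depth_difference_features(depth, pixels, offsets):
    """For each pixel p, compute f(p) = d(p + u/d(p)) - d(p + v/d(p))
    for a list of offset pairs (u, v); dividing the offsets by d(p)
    makes the feature roughly depth-invariant."""
    h, w = depth.shape
    feats = np.empty((len(pixels), len(offsets)), dtype=np.float32)
    for i, (y, x) in enumerate(pixels):
        d = max(float(depth[y, x]), 1e-3)          # guard against zero depth
        for j, (u, v) in enumerate(offsets):
            y1 = int(np.clip(y + u[0] / d, 0, h - 1))
            x1 = int(np.clip(x + u[1] / d, 0, w - 1))
            y2 = int(np.clip(y + v[0] / d, 0, h - 1))
            x2 = int(np.clip(x + v[1] / d, 0, w - 1))
            feats[i, j] = depth[y1, x1] - depth[y2, x2]
    return feats

rng = np.random.default_rng(0)

# Random offset pairs (hypothetical scale, in pixel*metre units).
offsets = [(rng.uniform(-30, 30, 2), rng.uniform(-30, 30, 2)) for _ in range(32)]

# Toy stand-in data: a synthetic depth map plus per-pixel part labels
# (0 = palm, 1..5 = fingers); real training data would be annotated depth frames.
depth = rng.uniform(0.4, 0.8, size=(120, 160)).astype(np.float32)
ys = rng.integers(0, 120, size=500)
xs = rng.integers(0, 160, size=500)
pixels = list(zip(ys.tolist(), xs.tolist()))
labels = rng.integers(0, 6, size=500)

X = depth_difference_features(depth, pixels, offsets)
clf = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
clf.fit(X, labels)

# In a hierarchical pipeline, pixels predicted as palm would first anchor the
# global hand position and orientation, and finger pixels would then be
# resolved relative to the recovered palm.
print(clf.predict(X[:10]))
```

Such a sketch only illustrates the palm-then-fingers decomposition the abstract describes; the inverse-matrix step and the exact feature set are detailed in the full article.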



Acknowledgements

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01419) supervised by the IITP (Institute for Information & communications Technology Promotion).

Author information

Corresponding author

Correspondence to Gye-Young Kim.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kim, SH., Jang, SW., Park, JH. et al. Robust hand pose estimation using visual sensor in IoT environment. J Supercomput 76, 5382–5401 (2020). https://doi.org/10.1007/s11227-019-03082-3

