Design and Implementation of a DQN Based AAV

  • Conference paper
  • In: Advances on Broad-Band Wireless Computing, Communication and Applications (BWCCA 2020)

Abstract

The Deep Q-Network (DQN) is a deep reinforcement learning method: it uses a deep neural network to estimate the Q-values of the Q-learning technique. The authors have previously developed a simulation system for DQN-based behavioral control of actor nodes in Wireless Sensor and Actor Networks (WSANs). In this paper, an Autonomous Aerial Vehicle (AAV) testbed is designed and implemented for DQN-based mobility control. We evaluate the performance of the AAV testbed in an indoor single-path environment. The experimental results show that the DQN can control the AAV.
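In the abstract's terms, DQN approximates the action-value function Q(s, a) with a deep network and trains it toward the Q-learning target y = r + γ · max_a' Q(s', a'). The sketch below illustrates that target update and ε-greedy action selection for discrete AAV movement commands. It is written in Rust, which the group's related work [12, 13] suggests for the testbed, but it is only a minimal illustration, not the authors' implementation: the state features, action set, reward, and constants are assumptions, and a linear approximator stands in for the deep network.

// Minimal DQN-style update sketch (illustration only, not the testbed code).
// A linear Q-approximator stands in for the deep network; state = (x, y)
// position features, actions = four hypothetical movement directions.

const GAMMA: f64 = 0.99;    // discount factor (assumed value)
const ALPHA: f64 = 0.01;    // learning rate (assumed value)
const N_ACTIONS: usize = 4; // forward, back, left, right (hypothetical)

/// Linear Q-function: one weight vector per action over the state features.
struct QNet {
    w: [[f64; 2]; N_ACTIONS],
}

impl QNet {
    fn q(&self, s: [f64; 2], a: usize) -> f64 {
        self.w[a][0] * s[0] + self.w[a][1] * s[1]
    }

    fn best_q(&self, s: [f64; 2]) -> f64 {
        (0..N_ACTIONS).map(|a| self.q(s, a)).fold(f64::MIN, f64::max)
    }

    /// One gradient step toward the target y = r + gamma * max_a' Q(s', a').
    fn update(&mut self, s: [f64; 2], a: usize, r: f64, s_next: [f64; 2]) {
        let y = r + GAMMA * self.best_q(s_next);
        let td_error = y - self.q(s, a);
        for i in 0..2 {
            self.w[a][i] += ALPHA * td_error * s[i];
        }
    }
}

/// Epsilon-greedy selection; a tiny LCG replaces a proper RNG to stay std-only.
fn select_action(net: &QNet, s: [f64; 2], eps: f64, seed: &mut u64) -> usize {
    *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    let u = (*seed >> 11) as f64 / (1u64 << 53) as f64;
    if u < eps {
        (*seed >> 33) as usize % N_ACTIONS // explore: random action
    } else {
        (0..N_ACTIONS) // exploit: argmax_a Q(s, a)
            .max_by(|&a, &b| net.q(s, a).partial_cmp(&net.q(s, b)).unwrap())
            .unwrap()
    }
}

fn main() {
    let mut net = QNet { w: [[0.0; 2]; N_ACTIONS] };
    let mut seed = 42u64;
    let s = [0.5, 0.5]; // hypothetical normalized position
    let a = select_action(&net, s, 0.1, &mut seed);
    // Pretend the AAV moved and was rewarded for approaching the goal.
    net.update(s, a, 1.0, [0.6, 0.5]);
    println!("Q(s, {}) after one update: {:.4}", a, net.q(s, a));
}

A full DQN as in [5, 6] would additionally use experience replay [9] and a separate target network when computing y; both are omitted here only to keep the sketch short.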


References

  1. Oda, T., Obukata, R., Ikeda, M., Barolli, L., Takizawa, M.: Design and implementation of a simulation system based on deep Q-network for mobile actor node control in wireless sensor and actor networks. In: Proceedings of the 31st IEEE International Conference on Advanced Information Networking and Applications Workshops (IEEE WAINA-2017) (2017)

  2. Oda, T., Kulla, E., Cuka, M., Elmazi, D., Ikeda, M., Barolli, L.: Performance evaluation of a deep Q-network based simulation system for actor node mobility control in wireless sensor and actor networks considering different distributions of events. In: Proceedings of the 11th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS-2017), pp. 36–49 (2017)

  3. Oda, T., Elmazi, D., Cuka, M., Kulla, E., Ikeda, M., Barolli, L.: Performance evaluation of a deep Q-network based simulation system for actor node mobility control in wireless sensor and actor networks considering three-dimensional environment. In: Proceedings of the 9th International Conference on Intelligent Networking and Collaborative Systems (INCoS-2017), pp. 41–52 (2017)

  4. Oda, T., Kulla, E., Katayama, K., Ikeda, M., Barolli, L.: A deep Q-network based simulation system for actor node mobility control in WSANs considering three-dimensional environment: a comparison study for normal and uniform distributions. In: Proceedings of the 12th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS-2018), pp. 842–852 (2018)

  5. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)

  6. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning, pp. 1–9 (2013). arXiv:1312.5602v1

  7. Lei, T., Ming, L.: A robot exploration strategy based on Q-learning network. In: Proceedings of the IEEE International Conference on Real-Time Computing and Robotics (RCAR-2016), pp. 57–62 (2016)

  8. Riedmiller, M.: Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In: Proceedings of the 16th European Conference on Machine Learning (ECML-2005), Lecture Notes in Computer Science, vol. 3720, pp. 317–328 (2005)

  9. Lin, L.J.: Reinforcement learning for robots using neural networks. Technical report, DTIC Document (1993)

  10. Lange, S., Riedmiller, M.: Deep auto-encoder neural networks in reinforcement learning. In: Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN-2010), pp. 1–8 (2010)

  11. Kaelbling, L.P., Littman, M.L., Cassandra, A.R.: Planning and acting in partially observable stochastic domains. Artif. Intell. 101(1–2), 99–134 (1998)

  12. The Rust Programming Language. https://www.rust-lang.org/. Accessed 14 Oct 2019

  13. Takano, K., Oda, T., Kohata, M.: Design of a DSL for converting Rust programming language into RTL. In: Proceedings of the 8th International Conference on Emerging Internet, Data & Web Technologies (EIDWT-2020), pp. 342–350 (2020)

  14. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS-2010), pp. 249–256 (2010)

  15. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS-2011), pp. 315–323 (2011)


Acknowledgement

This work was supported by a Grant for Promotion of Okayama University of Science (OUS) Research Project (OUS-RP-20-3).

Author information

Corresponding author

Correspondence to Nobuki Saito.



Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Saito, N., Oda, T., Hirata, A., Hirota, Y., Hirota, M., Katayama, K. (2021). Design and Implementation of a DQN Based AAV. In: Barolli, L., Takizawa, M., Enokido, T., Chen, HC., Matsuo, K. (eds) Advances on Broad-Band Wireless Computing, Communication and Applications. BWCCA 2020. Lecture Notes in Networks and Systems, vol 159. Springer, Cham. https://doi.org/10.1007/978-3-030-61108-8_32


  • DOI: https://doi.org/10.1007/978-3-030-61108-8_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61107-1

  • Online ISBN: 978-3-030-61108-8

  • eBook Packages: Engineering, Engineering (R0)
