Abstract
The Deep Q-Network (DQN) is a deep reinforcement learning algorithm that uses a deep neural network to estimate the Q-values of the Q-learning technique. The authors have previously developed a simulation system for DQN-based behavioral control of actor nodes in Wireless Sensor and Actor Networks (WSANs). In this paper, an Autonomous Aerial Vehicle (AAV) testbed is designed and implemented for DQN-based mobility control. We evaluate the performance of the AAV testbed in an indoor single-path environment. The experimental results show that the DQN can control the AAV.
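As background for the approach summarized above, the following is a minimal sketch of the core DQN update: a network Q(s, a) is trained toward the bootstrapped target y = r + γ max_a' Q_target(s', a') computed on replayed transitions. The tiny NumPy network and all names here are illustrative assumptions, not the authors' testbed implementation.

```python
# Minimal DQN target/TD-error sketch (assumption: toy NumPy network,
# illustrative names; not the paper's actual testbed code).
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, GAMMA = 4, 3, 0.99

def init_params():
    """One ReLU hidden layer; Xavier-style weight scaling."""
    return {
        "W1": rng.normal(0, np.sqrt(1 / STATE_DIM), (STATE_DIM, 16)),
        "W2": rng.normal(0, np.sqrt(1 / 16), (16, N_ACTIONS)),
    }

def q_values(params, s):
    """Forward pass: ReLU hidden layer, linear Q-value outputs."""
    h = np.maximum(0.0, s @ params["W1"])
    return h @ params["W2"]

online, target = init_params(), init_params()

# One replayed transition (s, a, r, s', done); in the full algorithm
# these are sampled uniformly from an experience-replay buffer.
s = rng.normal(size=STATE_DIM)
a, r, s_next, done = 1, 0.5, rng.normal(size=STATE_DIM), False

# DQN target: y = r + gamma * max_a' Q_target(s', a'); zero bootstrap
# on terminal transitions.
y = r + (0.0 if done else GAMMA * q_values(target, s_next).max())

# TD error for the taken action; gradient descent on (y - Q(s, a))^2
# updates only the online network, with periodic target-network syncs.
td_error = y - q_values(online, s)[a]
print(f"target y = {y:.3f}, TD error = {td_error:.3f}")
```

In the mobility-control setting, the state would encode the AAV's position/sensing, the actions its movement commands, and the reward its progress toward the destination.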
Acknowledgement
This work was supported by Grant for Promotion of Okayama University of Science (OUS) Research Project (OUS-RP-20-3).
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Saito, N., Oda, T., Hirata, A., Hirota, Y., Hirota, M., Katayama, K. (2021). Design and Implementation of a DQN Based AAV. In: Barolli, L., Takizawa, M., Enokido, T., Chen, HC., Matsuo, K. (eds) Advances on Broad-Band Wireless Computing, Communication and Applications. BWCCA 2020. Lecture Notes in Networks and Systems, vol 159. Springer, Cham. https://doi.org/10.1007/978-3-030-61108-8_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-61107-1
Online ISBN: 978-3-030-61108-8
eBook Packages: Engineering (R0)