Abstract
Artificial Neural Networks, as the name suggests, are biologically inspired algorithms designed to simulate the way the human brain processes information. Like biological neurons, which receive input from other neurons through a web of input terminals connected to the cell nucleus, an Artificial Neural Network consists of hundreds of simple units (artificial neurons, or processing elements) connected by coefficients (weights) and organized in layers. The power of neural computation comes from connecting neurons in a network: an Artificial Neural Network can process many pieces of information at the same time. What is not fully understood is the most efficient way to train an Artificial Neural Network, and in particular the best mini-batch size for maximizing accuracy while minimizing training time. The idea developed in this study has its roots in the biological world, which inspired the creation of Artificial Neural Networks in the first place.
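As a purely illustrative sketch of this structure (not taken from the paper; the layer sizes, activation, and initialization below are our own arbitrary choices), the following Python snippet builds a small feed-forward network in which each layer applies a weight matrix to the outputs of the previous layer:

```python
import numpy as np

# Minimal illustrative sketch: a feed-forward network as layers of
# units connected by weight matrices (coefficients).
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer sizes: 4 inputs -> 8 hidden units -> 3 outputs.
layer_sizes = [4, 8, 3]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each unit sums the weighted inputs from the previous layer and
    # applies a nonlinearity, mirroring the neuron analogy above.
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return x

print(forward(rng.standard_normal(4)))
```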
Humans have altered the face of the world through extraordinary adaptive and technological advances; those changes were made possible by our cognitive structure, particularly the ability to reason and to build causal models of external events. This dynamism is driven by a high degree of curiosity. In the biological world, and especially in human beings, curiosity arises from the constant search for knowledge and information: the behaviours that support this information-sampling mechanism range from the very small and simple (analogous to an initial small mini-batch size) to the very elaborate and sustained (analogous to an increasing mini-batch size).
The goal of this project is to train an Artificial Neural Network by increasing the mini-batch size dynamically and adaptively, guided by a validation set; our hypothesis is that this training method will be more efficient, in terms of time and cost, than the approaches implemented so far.
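A minimal sketch of this idea, under our own assumptions (the paper does not specify the growth rule here; the doubling factor, patience threshold, and linear toy model below are hypothetical choices): SGD starts with a small mini-batch and enlarges the batch whenever the validation loss plateaus.

```python
import numpy as np

# Sketch of validation-driven mini-batch growth on a toy linear
# regression problem (all hyperparameters are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(2000)
X_train, y_train = X[:1600], y[:1600]
X_val, y_val = X[1600:], y[1600:]

w = np.zeros(4)
batch_size, lr = 8, 0.05          # start small, like initial "curiosity"
best_val, patience, stall = np.inf, 3, 0

for epoch in range(50):
    idx = rng.permutation(len(X_train))
    for start in range(0, len(X_train), batch_size):
        b = idx[start:start + batch_size]
        # Mini-batch gradient of the mean squared error.
        grad = 2 * X_train[b].T @ (X_train[b] @ w - y_train[b]) / len(b)
        w -= lr * grad
    val_loss = np.mean((X_val @ w - y_val) ** 2)
    if val_loss < best_val - 1e-4:
        best_val, stall = val_loss, 0
    else:
        stall += 1
    if stall >= patience:         # validation plateau: enlarge batches
        batch_size = min(batch_size * 2, len(X_train))
        stall = 0
    print(f"epoch {epoch:2d}  batch {batch_size:4d}  val {val_loss:.4f}")
```

Starting small keeps early gradient steps cheap and noisy (exploratory, like initial curiosity), while growing the batch later reduces gradient variance without paying the cost of large batches from the start.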
Acknowledgements
The research leading to these results has received funding from the European Union’s Horizon 2020 Programme under the CLASS Project (https://class-project.eu/), grant agreement no. 780622.
This work was also partially supported by INdAM-GNCS (Research Projects 2018).
Cite this paper
Franchini, G., Burgio, P., Zanni, L. (2020). Artificial Neural Networks: The Missing Link Between Curiosity and Accuracy. In: Abraham, A., Cherukuri, A., Melin, P., Gandhi, N. (eds) Intelligent Systems Design and Applications. ISDA 2018. Advances in Intelligent Systems and Computing, vol 941. Springer, Cham. https://doi.org/10.1007/978-3-030-16660-1_100