Link to original content: https://doi.org/10.1007/s11063-007-9053-x

An Improved Particle Swarm Optimization for Evolving Feedforward Artificial Neural Networks

Neural Processing Letters

Abstract

This paper presents a new evolutionary artificial neural network (ANN) algorithm, named IPSONet, that is based on an improved particle swarm optimization (PSO). The improved PSO employs a parameter-automation strategy, velocity resetting, and crossover and mutation operators to significantly improve the global-search and fine-tuning performance of the original PSO algorithm. IPSONet uses the improved PSO to address the design problem of feedforward ANNs. Unlike most previous studies, which use PSO only to evolve the weights of ANNs, this study emphasizes using the improved PSO to evolve the structure and weights of ANNs simultaneously, through a specific individual representation and evolutionary scheme. The performance of IPSONet has been evaluated on several benchmarks. The results demonstrate that IPSONet can produce compact ANNs with good generalization ability.
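The abstract names the key ingredients: a particle encoding that carries both structure and weights, parameter automation, velocity resetting, and mutation/crossover. The sketch below shows one way these could fit together. It is a minimal illustration, not the paper's actual method: the network sizes, the zero-thresholded connectivity mask, the linear coefficient schedules, and the mutation rate are all assumptions introduced for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and hyperparameters; the paper's exact settings
# and benchmark datasets are not reproduced here.
N_IN, N_HID, N_OUT = 4, 6, 3            # single-hidden-layer feedforward ANN
DIM = N_IN * N_HID + N_HID * N_OUT      # one slot per potential connection
SWARM, ITERS, VMAX = 30, 200, 2.0

def fitness(p, X, y):
    """Classification error of the network decoded from particle p.

    Each particle holds a structure part (connection on/off, thresholded
    at zero) and a weight part, so structure and weights evolve together.
    """
    mask = (p[:DIM] > 0.0).astype(float)    # structure genes
    w = p[DIM:] * mask                      # weights gated by the mask
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    W2 = w[N_IN * N_HID:].reshape(N_HID, N_OUT)
    pred = (np.tanh(X @ W1) @ W2).argmax(axis=1)
    return np.mean(pred != y)

def ipso(X, y):
    pos = rng.uniform(-1.0, 1.0, (SWARM, 2 * DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(ITERS):
        # Parameter automation: the cognitive coefficient decays while the
        # social coefficient grows, shifting from exploration to fine-tuning.
        frac = t / ITERS
        c1, c2 = 2.5 - 2.0 * frac, 0.5 + 2.0 * frac
        w_inertia = 0.9 - 0.5 * frac
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = (w_inertia * vel
               + c1 * r1 * (pbest - pos)
               + c2 * r2 * (gbest - pos))
        # Velocity resetting: stalled components get a fresh random
        # velocity so the swarm does not collapse prematurely.
        stalled = np.abs(vel) < 1e-4
        vel[stalled] = rng.uniform(-VMAX, VMAX, stalled.sum())
        vel = np.clip(vel, -VMAX, VMAX)
        pos = pos + vel
        # Gaussian mutation on a small random subset of components
        # (the paper also applies crossover, which this sketch omits).
        mut = rng.random(pos.shape) < 0.01
        pos[mut] += rng.normal(0.0, 0.1, mut.sum())
        f = np.array([fitness(p, X, y) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

Because unused weights are zeroed by the mask, particles whose structure genes switch many connections off decode to compact networks, which is consistent with the abstract's claim that the method yields compact ANNs; the specific gating scheme here is only one plausible realization of that idea.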



Author information

Correspondence to Jianbo Yu.


Cite this article

Yu, J., Xi, L. & Wang, S. An Improved Particle Swarm Optimization for Evolving Feedforward Artificial Neural Networks. Neural Process Lett 26, 217–231 (2007). https://doi.org/10.1007/s11063-007-9053-x
