IJISA Vol. 10, No. 11, 8 Nov. 2018
Index Terms: Particle Swarm Optimization algorithm, Quantum Tunneling, Artificial Neural Networks, Global Optimization, Nelder Mead, Feedforward Neural Networks
In this paper a new Quantum Tunneling Particle Swarm Optimization (QTPSO) algorithm is proposed and applied to the training of feedforward Artificial Neural Networks (ANNs). In the classical Particle Swarm Optimization (PSO) algorithm, the value of the cost function at each particle's personal best solution can never increase, which can significantly reduce the explorative ability of the entire swarm. In the proposed algorithm, the personal best solution of each particle is instead allowed to tunnel through hills in the cost function, analogous to the tunneling effect in quantum physics: a particle with insufficient energy to cross a potential barrier can still cross it with a small probability that decreases exponentially with the barrier length. This tunneling mechanism allows particles to escape from local minima, increasing the explorative ability of the PSO algorithm and preventing premature convergence. The proposed algorithm significantly outperforms three state-of-the-art PSO variants on a majority of benchmark neural network training problems.
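The abstract specifies only the core idea: a personal best may move to a worse point with a probability that decays exponentially with the barrier width. The sketch below is a minimal illustration of that idea on a standard multimodal benchmark, not the paper's implementation; the acceptance rule `exp(-gamma * width)`, the decay parameter `gamma`, and all other parameter values are assumptions, and the paper's Nelder Mead component is omitted.

```python
import math
import random

def rastrigin(x):
    # Standard multimodal benchmark with many local minima; global minimum 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def qtpso(cost, dim=2, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5,
          bound=5.12, gamma=1.0, seed=0):
    """Sketch of a tunneling-enabled PSO. gamma is an assumed decay rate for
    the tunneling probability exp(-gamma * barrier_width)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            # Classical PSO velocity and position update with inertia w.
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = max(-bound, min(bound, pos[i][d] + vel[i][d]))
            f = cost(pos[i])
            if f < pbest_val[i]:
                # Classical rule: the personal best only improves.
                pbest[i], pbest_val[i] = pos[i][:], f
            else:
                # Assumed tunneling step: accept a worse personal best with a
                # probability that decays exponentially with the "barrier
                # width", taken here as the distance between the candidate
                # and the current personal best.
                width = math.dist(pos[i], pbest[i])
                if rng.random() < math.exp(-gamma * width):
                    pbest[i], pbest_val[i] = pos[i][:], f
            # The global best is only ever replaced by an improvement,
            # so the reported solution never degrades.
            if pbest_val[i] < gbest_val:
                gbest, gbest_val = pbest[i][:], pbest_val[i]
    return gbest, gbest_val
```

Because worse personal bests are occasionally accepted, the swarm retains exploration pressure late in the run, while the monotone global best preserves the best solution found so far.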
Geraldine Bessie Amali. D, Dinakaran. M, "A New Quantum Tunneling Particle Swarm Optimization Algorithm for Training Feedforward Neural Networks", International Journal of Intelligent Systems and Applications (IJISA), Vol. 10, No. 11, pp. 64-75, 2018. DOI: 10.5815/ijisa.2018.11.07
[1]A. Roy and M.M. Noel, “Design of a high speed line following robot that smoothly follows tight curves”, Computers and Electrical Engineering, vol. 56, pp. 732-747, 2016.
[2]F. M. Ham and I. Kostanic, Principles of Neurocomputing for Science and Engineering, vol. 1, McGraw-Hill, New York, NY, 2001.
[3]M. M. Noel, “A new gradient based particle swarm optimization algorithm for accurate computation of global minimum”, Applied Soft Computing, vol. 12 (1), pp. 353-359, 2012.
[4]F. N. Sibai, H. I. Hosani, R. M. Naqbi, S. Dhanhani and S. Shehhi, “Iris Recognition using artificial neural networks”, Expert Systems with Applications, vol. 38 (5), pp. 5940–5946, 2011.
[5]K. Smith and J. Gupta, “Continuous Function Optimisation via Gradient Descent on a Neural Network Approximation Function”, In: Proc. of International Work-Conference on Artificial Neural Networks, Berlin, Heidelberg, pp. 741-748, 2001.
[6]E. Adawy, “A SOFT-backpropagation algorithm for training neural networks”, In: Proc. of the Nineteenth National Radio Science Conference, Alexandria, Egypt, pp. 397-404, 2002.
[7]M. Gori and A. Tesi, “On the problem of local minima in backpropagation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.14 (1), pp.76-86, 1992.
[8]V.M. Nakarajan and M. M. Noel, “Galactic Swarm Optimization: A new global optimization metaheuristic inspired by galactic motion”, Applied Soft Computing, vol. 38, pp.771-787, 2016.
[9]G. Amali and M. Dinakaran, “Solution of the nonlinear least squares problem using a new gradient based genetic algorithm”, ARPN Journal of Engineering and Applied Sciences, vol. 11 (21), pp. 12876-12881, 2016.
[10]D. Whiteley, T. Starkweather and C. Bogart, “Genetic Algorithms and Neural Networks: Optimizing Connections and Connectivity”, Parallel Computing, vol. 14, pp. 347-361.
[11]R. E. Zi-wu and S. A. Ye, “Improvement of Real-valued Genetic Algorithm and Performance Study”, Acta Electronica Sinica, vol. 2, pp. 0-17, 2007.
[12]Oliker, M. Furst and O. Maimon, “Design architectures and training of neural networks with a distributed genetic algorithm”, In: Proc. of IEEE International Conference on Neural Networks, San Francisco, USA, pp. 199-202, 1993.
[13]S. G. Mendivil, O. Castillo and P. Melin, “Optimization of artificial neural network architectures for time series prediction using parallel genetic algorithms”, Soft Computing for Hybrid Intelligent System, vol. 154, pp. 387-399, 2008.
[14]Gudise and G. K. Venayagamoorthy, “Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks”, In: Proc. of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis, USA, pp. 110-117, 2003.
[15]R. C. Eberhart and Y. Shi, “Comparison between Genetic Algorithms and Particle Swarm Optimization”, Evolutionary Programming VII, Lecture Notes in Computer Science, vol. 1447, pp. 611-616, 1998.
[16]X. Chen, J. Wang, D. Sun and J. Liang, “A novel hybrid Evolutionary Algorithm based on PSO and AFSA for feedforward neural network training”, In: Proc. of the 4th International Conference on Wireless Communications, Networking and Mobile Computing, Dalian, China, pp. 1-5, 2008.
[17]J. Chen, J. Zheng, P. Wu, L. Zhang and Q. Wu, “Dynamic particle swarm optimizer with escaping prey for solving constrained non-convex and piecewise optimization problems”, Expert Systems with Applications, vol. 86, pp. 208-223, 2017.
[18]M. M. Noel and T. C. Janette, “A new continuous optimization algorithm based on sociological model”, In: Proc. of American Control Conference, Portland, USA, pp. 237-242, 2005.
[19]A. Conde, A. Arriandiaga, J. A. Sanchez, E. Portillo, S. Plaza and I. Cabanes, “High-accuracy wire electrical discharge machining using artificial neural networks and optimization techniques”, Robotics and Computer-Integrated Manufacturing, vol. 49, pp. 24-38, 2018.
[20]G. P. Singh and A. Singh, “Comparative study of krill herd, firefly and cuckoo search algorithms for unimodal and multimodal optimization”, International Journal of Intelligent Systems and Applications, vol. 6 (3), pp. 35-49, 2014.
[21]G. Amali and V. Vijayarajan, “Accurate solution of benchmark linear programming problems using hybrid particle swarm optimization (PSO) algorithms”, International Journal of Applied Engineering Research, vol. 10 (4), pp. 9101-9110, 2015.
[22]Y. Zhang, D. W. Gong, X. Y. Sun and Y. N. Guo, “A PSO-based multi-objective multi-label feature selection method in classification”, Scientific Reports, vol. 7, pp. 1-12, 2017.
[23]J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer”, In: Proc. of Swarm Intelligence Symposium, Pasadena, USA, pp. 124-129, 2005.
[24]D. G. B. Amali and M. Dinakaran, “A review of heuristic global optimization based artificial neural network training approaches”, International Journal of Pharmacy and Technology, vol. 8 (4), pp. 21670-21679, 2016.
[25]S. Jiang, K. S. Chin, L. Wang, G. Qu and K. L. Tsui, “Modified genetic algorithm-based feature selection combined with pre-trained deep neural network for demand forecasting in outpatient department”, Expert Systems with Applications, vol. 82, pp. 216-230, 2017.
[26]J. J. Liang, A. K. Qin, P. N. Suganthan and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions”, IEEE Transactions on Evolutionary Computation, vol. 10 (3), pp. 281-295, 2006.
[27]M. Nasir, S. Das, D. Maity, S. Sengupta, U. Halder and P. N. Suganthan, “A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization”, Information Sciences, vol. 209, pp. 16-36, 2012.
[28]W. Chagra, H. Degachi and Ksouri, “Nonlinear model predictive control based on Nelder Mead optimization method”, Nonlinear Dynamics, pp. 1-12, 2017.
[29]E. Zahara and Y. Kao, “Hybrid Nelder Mead simplex search and particle swarm optimization for constrained engineering design problems”, Expert Systems with Applications, vol. 36 (2), pp. 3880-3886, 2009.