Open Access Article

Optimizing Error Function of Backpropagation Neural Network

Munmi Gogoi, Ashim Jyoti Gogoi, Shahin Ara Begum

Section:Research Paper, Product Type: Journal Paper
Volume-7 , Issue-4 , Page no. 1011-1016, Apr-2019

CrossRef-DOI:   https://doi.org/10.26438/ijcse/v7i4.10111016

Online published on Apr 30, 2019

Copyright © Munmi Gogoi, Ashim Jyoti Gogoi, Shahin Ara Begum. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: Munmi Gogoi, Ashim Jyoti Gogoi, Shahin Ara Begum, “Optimizing Error Function of Backpropagation Neural Network,” International Journal of Computer Sciences and Engineering, Vol.7, Issue.4, pp.1011-1016, 2019.

MLA Style Citation: Munmi Gogoi, Ashim Jyoti Gogoi, Shahin Ara Begum "Optimizing Error Function of Backpropagation Neural Network." International Journal of Computer Sciences and Engineering 7.4 (2019): 1011-1016.

APA Style Citation: Munmi Gogoi, Ashim Jyoti Gogoi, Shahin Ara Begum, (2019). Optimizing Error Function of Backpropagation Neural Network. International Journal of Computer Sciences and Engineering, 7(4), 1011-1016.

BibTex Style Citation:
@article{Gogoi_2019,
author = {Munmi Gogoi, Ashim Jyoti Gogoi, Shahin Ara Begum},
title = {Optimizing Error Function of Backpropagation Neural Network},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {April 2019},
volume = {7},
issue = {4},
month = {apr},
year = {2019},
issn = {2347-2693},
pages = {1011-1016},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4158},
doi = {https://doi.org/10.26438/ijcse/v7i4.10111016},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v7i4.10111016
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=4158
TI - Optimizing Error Function of Backpropagation Neural Network
T2 - International Journal of Computer Sciences and Engineering
AU - Gogoi, Munmi
AU - Gogoi, Ashim Jyoti
AU - Begum, Shahin Ara
PY - 2019
DA - 2019/04/30
PB - IJCSE, Indore, INDIA
SP - 1011-1016
IS - 4
VL - 7
SN - 2347-2693
ER -


Abstract

The backpropagation (BP) algorithm is one of the most popular and effective learning algorithms for training neural networks, from Multilayer Perceptrons (MLPs) to today's deep learning models in the domain of Artificial Intelligence (AI). The algorithm works in two phases. In the forward phase, the input is fed through the network over communication links with synaptic weights, and the activation function decides whether each hidden neuron fires. The primary focus of the present work is on the backpropagated error, which determines the amount of weight updating. The driving force of the algorithm is to minimize the error by gradient descent: the error function is differentiated to obtain the gradient of the error, and the weights are updated so as to reduce it. In this paper, our approach is to reduce the error of the Backpropagation Neural Network (BPNN) subject to constraints, using swarm intelligence based optimization methods. For this, the optimization problem has been formulated mathematically with constraints that keep the network parameters within acceptable ranges. This research investigation presents a comparison of results obtained by solving the minimization problem with different variants of the swarm intelligence technique, namely PSO, HBPSO, and ALCPSO.
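To illustrate the general idea of minimizing a network's error function with swarm intelligence, the following is a minimal sketch (not the paper's actual formulation): a single sigmoid neuron's summed squared error over a toy data set is minimized by a plain global-best PSO. The data set, swarm size, and coefficient values are illustrative assumptions; the paper's constrained problem and the HBPSO/ALCPSO variants are not reproduced here.

```python
import math
import random

# Hypothetical toy data: fit y = sigmoid(w*x + b) to three (x, target) pairs.
DATA = [(-1.0, 0.2), (0.0, 0.5), (1.0, 0.8)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def error(params):
    """Summed squared error of the neuron over DATA (the function PSO minimizes)."""
    w, b = params
    return sum((sigmoid(w * x + b) - t) ** 2 for x, t in DATA)

def pso(n_particles=20, n_iters=200, inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best PSO over the 2-D parameter space (w, b)."""
    rng = random.Random(seed)
    dim = 2
    pos = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_err = [error(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]  # swarm's best position so far
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            e = error(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], e
                if e < gbest_err:
                    gbest, gbest_err = pos[i][:], e
    return gbest, gbest_err
```

The same loop applies unchanged when `error` is the backpropagation network's error function over its weight vector; the PSO variants compared in the paper differ mainly in how the velocity update and leader selection are performed.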

Key-Words / Index Term

Backpropagation, Deep learning, PSO, HBPSO, ALCPSO

References

[1] S. I. Gallant, “Perceptron-based learning algorithms”, IEEE Trans. on Neural Networks, vol. 1, no. 2, pp. 179-191, 1990.
[2] K. Hornik, M. Stinchcombe, H. White, “Multilayer feedforward networks are universal approximators”, Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[3] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, “Semi-supervised learning with deep generative models”, in Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
[4] D. E. Rumelhart, G. E. Hinton, R. J. Williams, “Learning internal representations by error propagation”, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1986.
[5] Saduf, M. A. Wani, A. Wani, “Comparative study of back propagation learning algorithms for neural networks”, International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, no. 12, 2013.
[6] Y. LeCun, Y. Bengio, G. Hinton, “Deep learning”, Nature, vol. 521, pp. 436-444, 2015.
[7] N. S. Lele, “Image Classification Using Convolutional Neural Network”, International Journal of Scientific Research in Computer Science and Engineering, vol. 6, no. 3, pp. 22-26, 2018.
[8] G. E. Hinton, S. Osindero, Y. W. Teh, “A fast learning algorithm for deep belief nets”, Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.
[9] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks”, Science, vol. 313, no. 5786, pp. 504-507, 2006.
[10] P. Chandra, Y. Singh, “An activation function adapting training algorithm for sigmoid feed forward networks”, Neurocomputing, vol. 61, pp. 429-437, 2004.
[11] N. K. Bose and P. Liang, “Neural Network Fundamentals with Graph Algorithms and Applications”, McGraw-Hill, 1996.
[12] Y. H. Zweiri, L D. Seneviratne, K. Althoefer, “Stability analysis of three term back propagation algorithm”, Neural Networks, vol. 18, pp. 1341-1347, 2005.
[13] S. Haykin, “Neural Networks: A Comprehensive Foundation”, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1999.
[14] C. C. Yu, B. D. Liu, “A backpropagation algorithm with adaptive learning rate and momentum coefficient”, in IEEE International Joint Conference on Neural Networks, vol. 2, pp. 1218-1223, 2002.
[15] H. Shao, H. Zheng, “A new BP algorithm with adaptive momentum for FNNs training”, in GCIS 2009, Xiamen, China, pp. 16-20, 2009.
[16] M. Z. Rehman, N. M. Nawi, M. I. Ghazali, “Noise-induced hearing loss (NIHL) prediction in humans using a modified back propagation neural network”, in 2nd International Conference on Science Engineering and Technology, pp. 185-189, 2011.
[17] Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle, “Greedy layer-wise training of deep networks”, in Advances in neural information processing systems, pp. 153-160, 2007.
[18] C. Poultney, S. Chopra, Y. L. Cun, “Efficient learning of sparse representations with an energy-based model”, in Advances in neural information processing systems, pp. 1137-1144, 2007.
[19] X. Kai, “Research on the Improvement of BP Neural Network Algorithm and its Application”, Advanced Materials Research, vol. 926, pp. 3216-3219, 2014.
[20] Z. Boger, H. Guterman, “Knowledge extraction from artificial neural network models”, in IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, USA, 1997.
[21] M. J. A. Berry, G. Linoff, “Data Mining Techniques”, NY: John Wiley & Sons, 1997.
[22] S. Marsland, “Machine Learning: An Algorithmic Perspective”, CRC Press, Taylor and Francis Group, London, New York, 2009.
[23] J. Kennedy and R. C. Eberhart, “Particle swarm optimization”, in Proceedings of the IEEE International Conference on Neural Networks, pp 1942-1948, Dec. 1995.
[24] J. Kennedy, “Particle swarm optimization”, Encyclopedia of machine learning. Springer US, pp. 760-766, 2011.
[25] S. S. Rao, “Engineering Optimization: Theory and Practice”, John Wiley and Sons, 2009.
[26] K. Kamiyama, “Particle Swarm Optimization - A Survey”, IEICE Transactions on Information and Systems, vol. 92, no. 7, 2009.
[27] L. Hao, “Human behavior-based particle swarm optimization”, The Scientific World Journal, 2014.
[28] W. N. Chen et al., “Particle swarm optimization with an aging leader and challengers”, IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[29] A. J. Gogoi, N. M. Laskar, Ch. L. Singh, K. L. Baishnab, and K. Guha, “Throughput Optimization in Multi-user Cognitive Radio Network using Swarm Intelligence Techniques”, Journal of Information Science & Engineering, vol. 34, no. 4, 2018.