Enhancing the Performance of the BackPropagation for Deep Neural Network

Authors

  • Ola Mohammed Surakhi, Department of Computer Science, Princess Sumaya University for Technology, Amman, Jordan
  • Walid A. Salameh, Department of Computer Science, Princess Sumaya University for Technology, Amman, Jordan

DOI:

https://doi.org/10.24297/ijct.v13i12.5279

Keywords:

Neural Networks, Deep Neural Network, Backpropagation, Momentum, Learning Rate, Optical Backpropagation, Extended Optical BP, Performance Analysis

Abstract

The standard Backpropagation Neural Network (BPNN) algorithm is widely used for solving many real-world problems. However, backpropagation suffers from several difficulties, such as slow convergence and convergence to local minima. Many modifications have been proposed to improve the performance of the algorithm, including careful selection of the initial weights and biases, the learning rate, the momentum, the network topology, and the activation function. This paper presents a new variant of the backpropagation algorithm. The modification is applied to the error signal function and is used with deep neural networks that have more than one hidden layer. Experiments were conducted to compare and evaluate the convergence behavior of these training algorithms on two benchmark problems: XOR and Iris plant classification. The results show that the proposed algorithm improves on the classical BP in terms of efficiency.
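The paper itself contains no source code, but the idea it builds on, the Optical Backpropagation (OBP) error signal of references [6] and [7], is straightforward to sketch. In classical BP the output-layer error signal is δ = (t − o) · f′(net); OBP replaces the raw error (t − o) with a nonlinearly amplified term sign(t − o) · (1 + e^((t−o)²)), which enlarges small errors and speeds up the early weight updates. The following minimal NumPy sketch applies that rule at the output layer of a small two-hidden-layer network on the XOR problem; the layer sizes, learning rate, and training loop are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def obp_delta(target, output):
    # OBP-style output error signal (after refs [6], [7]):
    # the raw error (t - o) is replaced by
    # sign(t - o) * (1 + exp((t - o)**2)), then multiplied by the
    # sigmoid derivative o * (1 - o) as in classical BP.
    err = target - output
    amplified = np.sign(err) * (1.0 + np.exp(err ** 2))
    return amplified * output * (1.0 - output)

# XOR problem: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers, mirroring the paper's "more than one hidden
# layer" setting; sizes and learning rate are assumptions.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 4)), np.zeros(4)
W3, b3 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
lr = 0.1

for epoch in range(5000):
    # Forward pass through both hidden layers.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    o = sigmoid(h2 @ W3 + b3)

    # Output delta uses the amplified OBP error; the hidden-layer
    # deltas are backpropagated with the classical chain rule.
    d3 = obp_delta(T, o)
    d2 = (d3 @ W3.T) * h2 * (1.0 - h2)
    d1 = (d2 @ W2.T) * h1 * (1.0 - h1)

    # Error-correction updates: W += lr * activations^T @ delta.
    W3 += lr * h2.T @ d3; b3 += lr * d3.sum(axis=0)
    W2 += lr * h1.T @ d2; b2 += lr * d2.sum(axis=0)
    W1 += lr * X.T @ d1;  b1 += lr * d1.sum(axis=0)

print(np.round(o, 3))  # outputs should approach [0, 1, 1, 0]
```

Because the amplified term never falls below 2 in magnitude for a nonzero error, OBP keeps pushing the outputs toward their targets even when the raw error is small, which is the intuition behind the faster convergence the abstract reports.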


References

[1] F. M. Silva and L. B. Almeida, "Speeding up backpropagation," in Advanced Neural Computers, 1990, pp. 151-158.
[2] J. A. Freeman and D. M. Skapura, "Backpropagation," in Neural Networks: Algorithms, Applications, and Programming Techniques, 1992, pp. 89-125.
[3] J. Leonard and M. A. Kramer, "Improvement of the backpropagation algorithm for training neural networks," Computers & Chemical Engineering, vol. 14, no. 3, pp. 337-341, 1990.
[4] S. Haykin, Neural Networks: A Comprehensive Foundation, PHI, New Delhi, India, 2003.
[5] A. A. Minai and R. D. Williams, "Acceleration of back-propagation through learning rate and momentum adaptation," in Proceedings of the International Joint Conference on Neural Networks, 1990, pp. 1676-1679.
[6] M. A. Otair and W. A. Salameh, "An improved back-propagation neural networks using a modified non-linear function," in Proceedings of the IASTED International Conference, 2004, pp. 442-447.
[7] M. A. Otair and W. A. Salameh, "Efficient training of backpropagation neural networks," Neural Network World, vol. 6, pp. 291-311, 2006. http://www.nnw.cz/obsahy06.html
[8] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1986.
[9] D. E. Rumelhart, R. Durbin, R. Golden, and Y. Chauvin, "Backpropagation: theoretical foundations," in Y. Chauvin and D. E. Rumelhart (eds.), Backpropagation and Connectionist Theory, Lawrence Erlbaum, 1992.
[10] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of AISTATS, 2010, pp. 249-256.

Published

2014-10-31

How to Cite

Surakhi, O. M., & Salameh, W. A. (2014). Enhancing the Performance of the BackPropagation for Deep Neural Network. International Journal of Computers & Technology, 13(12), 5274–5285. https://doi.org/10.24297/ijct.v13i12.5279

Issue

Vol. 13 No. 12 (2014)

Section

Research Articles