A new conjugate gradient for unconstrained optimization problems and its applications in neural networks

Alaa Luqman Ibrahim, Mohammed Guhdar Mohammed

Abstract


We introduce an efficient and effective conjugate gradient method for large-scale unconstrained optimization problems. The primary goal is to improve the conjugate gradient method's search direction and thereby obtain a new, more effective method based on a modified vector that depends on the Barzilai-Borwein step size. The proposed algorithm has the following properties: (i) it achieves global convergence; (ii) numerical results on large-scale test functions show that it outperforms comparable optimization methods in the number of iterations (NI) and the number of function evaluations (NF); and (iii) it is applied to train neural networks, improving their performance.
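The abstract does not reproduce the paper's search-direction formula, but the Barzilai-Borwein step size it builds on is standard. As a minimal illustrative sketch (not the authors' method), the BB1 step length α_k = sᵀs / sᵀy can drive a gradient iteration on a smooth function; all names below are hypothetical:

```python
import numpy as np

def bb_gradient(grad, x0, max_iter=500, tol=1e-8):
    """Gradient method with the Barzilai-Borwein (BB1) step length.

    Illustrative sketch only: the paper modifies a conjugate gradient
    direction using this step size; here we show the step-size formula
    itself in its simplest setting.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3  # small initial step before BB information exists
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g       # iterate and gradient differences
        sy = s @ y
        # BB1 step length alpha_k = s^T s / s^T y, with a safeguard
        alpha = (s @ s) / sy if sy > 1e-12 else 1e-3
        x, g = x_new, g_new
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves A x = b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_star = bb_gradient(lambda x: A @ x - b, np.zeros(2))
```

For a strongly convex quadratic the BB step is the inverse of a Rayleigh quotient of the Hessian, which is what makes the method effective without a line search; the paper's contribution is to embed this step size inside a conjugate gradient direction with guaranteed descent.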

Keywords


Conjugate gradient method; Global convergence; Neural networks; The descent property; Unconstrained optimization


DOI: http://doi.org/10.11591/ijeecs.v33.i1.pp93-100


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The Indonesian Journal of Electrical Engineering and Computer Science (IJEECS)
p-ISSN: 2502-4752, e-ISSN: 2502-4760
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
