Alaa Luqman Ibrahim


Lecturer

Specialties

Optimization, Operations Research, Numerical Analysis

Education

M.Sc. in Optimization

University of Zakho

2017

B.Sc. in Mathematics

Department of Mathematics - University of Duhok

2013

Membership


2017

2017-10-25 to 2019-12-01
Website Committee in the Dept. of Mathematics

College of Science - University of Duhok

2017-09-20 to 2017-11-15
Supervision Committee for the direct exam in the Department

Dept. of Mathematics - College of Science - University of Duhok

2017-04-30 to 2018-09-01
Exam Committee in the Department

Dept. of Mathematics - College of Science - University of Duhok

2016

2016-10-23 to 2017-07-01
TQA Committee in the Department

Dept. of Mathematics - College of Science - University of Duhok

2013

2013-12-11 to 2014-07-01
Committee to register students' names for the academic year (2013-2014) in the College of Science - University of Duhok

College of Science - University of Duhok

Academic Title

Lecturer

2021-06-06

Asst. Lecturer

2018-06-03

Assistant Researcher

2013-11-06

Published Journal Articles

Network Modeling Analysis in Health Informatics and Bioinformatics (Volume : 14)
An improved three-term conjugate gradient approach for deep neural network training in ECG signal classification

In this paper, a three-term conjugate gradient method is proposed as an advanced optimization technique for enhancing ECG signal classification using deep learning. This method is designed for solving unconstrained optimization problems by leveraging a modified gradient difference vector while ensuring sufficient descent and convergence properties. The results demonstrate that the proposed method outperforms traditional techniques in terms of the number of iterations, function evaluations, and computational time required to reach a solution. It was also applied to train deep neural networks for ECG signal classification, achieving an accuracy of 96.32%, which is a significant improvement over existing models. This integration of deep learning and advanced gradient-based optimization techniques leads to higher classification accuracy for ECG signals, with a notable reduction in mean squared error, indicating better generalization capability and reduced prediction errors compared to other methods. These findings highlight the potential of the proposed approach in improving ECG classification accuracy, contributing to more efficient and precise diagnosis of heart diseases, ultimately assisting healthcare professionals in making faster and more accurate clinical decisions.
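
For context, a generic three-term conjugate gradient iteration has the form below; this is a standard sketch, and the specific choices of \beta_k and \theta_k that define the proposed method are given in the paper, not here.

    x_{k+1} = x_k + \alpha_k d_k, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k + \theta_k y_k, \qquad y_k = g_{k+1} - g_k,

where g_k = \nabla f(x_k) and \alpha_k is the line-search step; the third term \theta_k y_k is what distinguishes three-term methods from classical two-term CG.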

 2025-11
International Journal of Applied and Computational Mathematics (Volume : 11)
Adjusting Parameters for Conjugate Gradient Method for Training Neural Networks and Solving Unconstrained Minimization Problems

This work investigates two newly developed modified conjugate gradient (CG) algorithms, namely BBM1 and BBM2, designed to enhance performance on unconstrained optimization problems and improve neural network training. These algorithms incorporate advanced derivative-based techniques while employing flexible line search strategies to ensure sufficient descent directions. The proposed methods are supported by a solid theoretical framework that guarantees global convergence under suitable assumptions. Extensive numerical experiments demonstrate that both BBM1 and BBM2 outperform classical CG algorithms, such as the Dai-Yuan (DY) method, in terms of computational efficiency, convergence speed, and robustness. Moreover, the applicability of these algorithms is validated through their successful deployment in training recurrent neural networks, showcasing their capability to address large-scale nonlinear optimization tasks. The promising results confirm the potential of the new methods as effective tools for various optimization problems and artificial intelligence (AI) applications.

 2025-09
International Journal of Neutrosophic Science (IJNS) (Issue : 1) (Volume : 27)
Solving Unconstrained Minimization Problems and Training Neural Networks via Enhanced Conjugate Gradient Algorithms

Artificial neural networks have become a cornerstone of modern artificial intelligence, powering progress in a wide range of fields. Their effective training heavily depends on techniques from unconstrained optimization, with iterative gradient-based methods being especially common. This study presents a new variant of the conjugate gradient method tailored specifically for unconstrained optimization tasks. The method is carefully designed to meet the sufficient descent condition and ensures global convergence. Comprehensive numerical testing highlights its advantages over traditional conjugate gradient techniques, showing improved performance in terms of iteration counts, function evaluations, and overall computational time across a variety of problem sizes. Additionally, this new approach has been successfully used to improve neural network training. Experimental results show faster convergence and better accuracy, with fewer training iterations and reduced mean squared error compared to standard methods. Overall, this work offers a meaningful contribution to optimization strategies in neural network training, demonstrating the method's potential to tackle the complex optimization problems often encountered in machine learning.
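
As background, here is a minimal runnable sketch of a standard nonlinear conjugate gradient loop in Python, using the classical PRP+ update and SciPy's strong-Wolfe line search; it illustrates the general family of methods discussed here, not the paper's proposed variant.

import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG with a PRP+ beta and a strong-Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # strong-Wolfe step length
        if alpha is None:                      # line search failed: small fallback step
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+, restarts when negative
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the 2-D Rosenbrock function from a standard starting point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, [-1.2, 1.0]))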

 2025-09
Zanco Journal of Pure and Applied Sciences (Issue : 3) (Volume : 37)
Improved Dai–Liao Conjugate Gradient Methods for Large-Scale Unconstrained Optimization

This research introduces and evaluates two enhanced conjugate gradient methods for unconstrained optimization, building upon the Dai–Liao conjugacy condition and further refined through the application of Taylor series expansion. These novel methodologies were rigorously compared against the classical Hestenes-Stiefel (HS) method using a diverse suite of benchmark test functions. The numerical results obtained unequivocally demonstrate a significant improvement in computational efficiency achieved by the proposed methods. Notably, our enhanced methods consistently outperformed the HS method across several critical performance metrics, including a reduction in the number of iterations required for convergence, a decrease in the total number of function evaluations, and an overall faster computation time.
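
For reference, the Dai-Liao conjugacy condition on which these methods build is

    d_{k+1}^T y_k = -t \, s_k^T g_{k+1}, \qquad t > 0,

where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; taking t = 0 recovers the classical conjugacy condition d_{k+1}^T y_k = 0. The Taylor-series refinement introduced in the paper is defined there and not reproduced here.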

 2025-06
Al-Rafidain Journal of Computer Sciences and Mathematics (RJCSM) (Issue : 1) (Volume : 19)
Improved Dai–Liao Conjugate Gradient Methods for Large-Scale Unconstrained Optimization

This research introduces and evaluates two enhanced conjugate gradient methods for unconstrained optimization, building upon the Dai–Liao conjugacy condition and further refined through the application of Taylor series expansion. These novel methodologies were rigorously compared against the classical Hestenes-Stiefel (HS) method using a diverse suite of benchmark test functions. The numerical results obtained unequivocally demonstrate a significant improvement in computational efficiency achieved by the proposed methods. Notably, our enhanced methods consistently outperformed the HS method across several critical performance metrics, including a reduction in the number of iterations required for convergence, a decrease in the total number of function evaluations, and an overall faster computation time.

 2025-06
European Journal of Pure and Applied Mathematics (Issue : 2) (Volume : 18)
A Globally Convergent Conjugate Gradient Method Incorporating Perry’s Parameter for Unconstrained Optimization

To develop a conjugate gradient method that is both theoretically robust and practically effective for solving unconstrained optimization problems, this paper introduces a novel conjugate gradient method incorporating Perry’s parameter along with the gradient-difference vector, as suggested by Powell, to enhance performance. The proposed method satisfies the descent condition, and its global convergence is established under standard assumptions. To assess its effectiveness, the method was tested on a diverse set of unconstrained optimization problems and compared against well-known conjugate gradient methods. Numerical experiments indicate that the proposed method outperforms classical approaches in terms of iteration count, function evaluations, and computational time. The results confirm the robustness and efficiency of the proposed method, making it a competitive choice for large-scale optimization problems.
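
For reference, Perry's conjugate gradient parameter, which the proposed method incorporates, is commonly written as

    \beta_k^{P} = \frac{g_{k+1}^T (y_k - s_k)}{d_k^T y_k},

with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; the paper's modification through Powell's gradient-difference vector is defined there, not here.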

 2025-05
Numerical Algebra, Control and Optimization
Conjugate gradient techniques: Enhancing optimization efficiency for large-scale problems and image restoration

Conjugate gradient methods are widely recognized for their efficiency in solving large-scale optimization problems with minimal memory requirements. This paper introduces a new CG algorithm derived from a modified secant condition, utilizing a redefined difference gradient vector to improve numerical stability and optimization performance. By integrating this modified vector into the computation of the conjugacy parameter β_k, we propose a new parameter, β_k^{ABM}, which effectively resolves the issues of non-negativity and convergence that are encountered in existing methods. Numerical experiments confirm the superior performance of the proposed method on unconstrained optimization problems across various dimensions, demonstrating faster convergence rates, fewer iterations, and reduced computational times compared to traditional CG (HS, DL, and LS) methods. Additionally, the algorithm's practical effectiveness is validated in medical image restoration tasks, specifically in reducing salt-and-pepper noise in grayscale and color images. When evaluated against established methods like HS, FHTTCGM-N, and HZ, the proposed algorithm achieves notable improvements in metrics such as iteration count, computational time, mean squared error, peak signal-to-noise ratio, and structural similarity index. These results underscore the algorithm's significant contributions to both the theoretical and practical advancements of CG methods in optimization and image restoration.

 2025-04
Neural Computing and Applications
Improving three-term conjugate gradient methods for training artificial neural networks in accurate heart disease prediction

This study presents a new optimization technique, the three-term conjugate gradient method, which enhances the efficiency of solving unconstrained optimization problems and its application in training artificial neural networks to predict heart diseases. It incorporates a gradient condition and a modified gradient difference vector. The method is validated through its global convergence and sufficient descent condition, ensuring both mathematical accuracy and computational stability. Compared to traditional conjugate gradient methods and three-term conjugate gradient methods, the proposed method demonstrates superior performance, characterized by fewer iterations, reduced function evaluations, and lower computational time. The method is also used to train artificial neural networks to predict heart disease, with an 88.15% success rate and precision, recall, and F1-Score values of 0.87, 0.86, and 0.87, respectively. These results confirm the method's effectiveness in enhancing the predictive power of artificial neural networks, making it a valuable tool for clinical applications. This study underscores the potential of the new method to improve decision-support systems in healthcare, aiding early detection and better patient outcomes. Future research may focus on refining the method and integrating it with other machine learning techniques to create advanced real-time predictive systems that further advance intelligent healthcare solutions.

 2025-03
Engineering Reports (Issue : 2) (Volume : 7)
Improved Conjugate Gradient Methods for Unconstrained Minimization Problems and Training Recurrent Neural Network

This research introduces two conjugate gradient methods, BIV1 and BIV2, designed to enhance efficiency and performance on unconstrained optimization problems using only first-derivative vectors. The study explores the derivation of new conjugate gradient parameters and investigates their practical performance. The proposed BIV1 and BIV2 methods are compared with the traditional Hestenes-Stiefel (HS) method through a series of numerical experiments. These experiments evaluate the methods on various test problems sourced from the CUTE library and other unconstrained problem collections. Key performance metrics, including the number of iterations, function evaluations, and CPU time, demonstrate that both BIV1 and BIV2 methods offer superior efficiency and effectiveness compared to the HS method. Furthermore, the effectiveness of these methods is illustrated in the context of training artificial neural networks. Experimental results show that the new methods achieve competitive performance in terms of convergence rate and accuracy.
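
For reference, the Hestenes-Stiefel baseline against which BIV1 and BIV2 are compared uses the parameter

    \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad y_k = g_{k+1} - g_k,

in the direction update d_{k+1} = -g_{k+1} + \beta_k^{HS} d_k.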

 2025-02
Advances in Nonlinear Variational Inequalities (Issue : 7) (Volume : 28)
New Various Dai-Liao Method for Solving Optimization Problems

This study introduces and evaluates three new conjugate gradient methods for unconstrained optimization based on Perry's conjugacy condition. These methods were compared with the classical Hestenes-Stiefel method using a variety of test functions. Numerical results demonstrated that the proposed methods significantly improved computational efficiency. Three main parameters were considered: the number of iterations, the number of function evaluations, and the computation time. These findings establish the proposed methods as competitive alternatives for large-scale optimization problems. Future research could focus on extending these methods to more complex problem domains.

 2025-02
European Journal of Pure and Applied Mathematics (Issue : 4) (Volume : 17)
A New Conjugate Gradient Method Based on Logistic Mapping for Unconstrained Optimization and Its Application in Regression Analysis

The study tackles the critical need for efficient optimization techniques in unconstrained optimization problems, where conventional techniques often suffer from slow and inefficient convergence. There is still a need for algorithms that strike a balance between computational efficiency and robustness, despite advancements in gradient-based techniques. This work introduces a novel conjugate gradient algorithm based on the logistic mapping formula. As part of the methodology, descent conditions are established, and the suggested algorithm's global convergence properties are thoroughly examined. Comprehensive numerical experiments are used for empirical validation, and the new algorithm is compared to the Polak-Ribière-Polyak (PRP) algorithm. The results show that the suggested approach performs better than the PRP algorithm and is more efficient, since it needs fewer function evaluations and iterations to reach convergence. Furthermore, the usefulness of the suggested approach is demonstrated by its practical use in regression analysis, notably in the modelling of population estimates for the Kurdistan Region of Iraq. In contrast to conventional least squares techniques, the method maintains low relative error rates while producing accurate predictions. Overall, this study presents the novel conjugate gradient algorithm as an effective tool for handling challenging optimization problems in both theoretical and real-world contexts.
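
For reference, the logistic mapping on which the proposed algorithm is based is the recurrence

    x_{n+1} = r \, x_n (1 - x_n), \qquad r \in (0, 4];

how the paper embeds this map into the conjugate gradient parameter is defined there and is not reproduced here.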

 2024-10
European Journal of Pure and Applied Mathematics (Issue : 4) (Volume : 17)
Enhanced Conjugate Gradient Method for Unconstrained Optimization and Its Application in Neural Networks

In this study, we present a novel conjugate gradient method specifically designed for unconstrained optimization problems. Traditional conjugate gradient methods have shown effectiveness in solving optimization problems, but they may encounter challenges when dealing with unconstrained problems. Our method addresses this issue by introducing modifications that enhance its performance in the unconstrained setting. We demonstrate that, under certain conditions, our method satisfies both the descent and the sufficient descent criteria, ensuring progress towards the optimal solution at each iteration. Moreover, we establish the global convergence of our method, providing confidence in its ability to find the global optimum. To showcase the practical applicability of our approach, we apply this novel method to a dataset, training a feed-forward neural network for continuous trigonometric function value estimation. To evaluate the efficiency and effectiveness of our modified approach, we conducted numerical experiments on a set of well-known test functions. These experiments reveal that our algorithm significantly reduces computational time due to its faster convergence rates and increased speed in directional minimization. These compelling results highlight the advantages of our approach over traditional conjugate gradient methods in the context of unconstrained optimization problems.

 2024-10
Indonesian Journal of Electrical Engineering and Computer Science (Issue : 1) (Volume : 33)
A new conjugate gradient for unconstrained optimization problems and its applications in neural networks

We introduce a novel efficient and effective conjugate gradient approach for large-scale unconstrained optimization problems. The primary goal is to improve the conjugate gradient method's search direction in order to propose a new, more active method based on the modified vector v_k^*, which depends on the step size of Barzilai and Borwein. The suggested algorithm features the following traits: (i) the ability to achieve global convergence; (ii) numerical results for large-scale functions show that the proposed algorithm is superior to other comparable optimization methods according to the number of iterations (NI) and the number of functions evaluated (NF); and (iii) it is used to train neural networks to improve their performance.
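
For reference, the two classical Barzilai-Borwein step sizes on which the modified vector v_k^* depends are

    \alpha_k^{BB1} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}, \qquad \alpha_k^{BB2} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}},

with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}; the definition of v_k^* itself is given in the paper.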

 2024-01
Telecommunication, Computing, Electronics and Control (Issue : 1) (Volume : 22)
Two new classes of conjugate gradient method based on logistic mapping

Following the standard method proposed by Polak-Ribière-Polyak (PRP), in this work we introduce two new non-linear conjugate gradient methods for solving unconstrained optimization problems, both based on PRP. The standard PRP method performs well numerically but does not satisfy the global convergence condition. In this paper we derive two attractive and powerful parameters that achieve better performance and better numerical results than the PRP method; each of our robust methods satisfies the descent condition and the global convergence condition under the Wolfe conditions. Moreover, the second method is modified by the logistic mapping form; the main novelty lies in the numerical results, which demonstrate good performance compared to the standard method.
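
For reference, the Polak-Ribière-Polyak parameter from which both new classes start is

    \beta_k^{PRP} = \frac{g_{k+1}^T (g_{k+1} - g_k)}{\|g_k\|^2},

which performs well in practice but is not globally convergent without safeguards, as the abstract notes.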

 2024-01
Journal of Technology and Informatics (JoTI) (Issue : 2) (Volume : 4)
Optimizing Accuracy of Stroke Prediction Using Logistic Regression

An unexpected limitation of blood supply to the brain and heart causes the majority of strokes. Stroke severity can be reduced by being aware of the many stroke warning signs in advance. A stroke may result if the flow of blood to a portion of the brain stops suddenly. In this research, we present a strategy for predicting the early onset of stroke disease using Logistic Regression (LR) algorithms. To improve the performance of the model, preprocessing techniques including SMOTE, feature selection, and outlier handling were applied to the dataset. This helped in achieving a balanced class distribution, identifying and removing unimportant features, and handling outliers. Stroke risk rises with increased blood pressure, body mass, heart conditions, average blood glucose levels, smoking status, prior stroke, and age. Impairment occurs as the brain's neurons gradually die, depending on which area of the brain is affected by the reduced blood supply. Early diagnosis of symptoms can be extremely helpful in predicting stroke and supporting a healthy lifestyle. Furthermore, we performed an experiment using logistic regression (LR) and compared it to a number of other studies that used the same machine learning model and the same dataset. The results showed that our method achieved the highest F1 score and area under the curve (AUC) score, making it a successful tool for stroke disease prediction with an accuracy of 86% compared to the other five studies in the same field. The predictive model for stroke has prospective applications and, as a result, remains significant for academics and practitioners in the fields of medicine and health sciences.
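
A minimal sketch of the pipeline the abstract describes, assuming scikit-learn and imbalanced-learn; the file name, target column, and hyperparameters are illustrative, and the categorical-encoding, feature-selection, and outlier-handling steps are elided.

import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical file/column names; features are assumed already numerically encoded.
df = pd.read_csv("stroke.csv")
X, y = df.drop(columns=["stroke"]), df["stroke"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE balances the minority (stroke) class on the training split only.
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("F1 :", f1_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))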

 2023-04
International Research Journal of Modernization in Engineering Technology and Science (Issue : 1) (Volume : 5)
A New Quasi-Newton Method For Non-Linear Optimization Problems And Its Application In Artificial Neural Networks (ANN)

The Quasi-Newton (QN) method is a widely used stationary iterative method for solving unconstrained optimization problems. One particular method within the Quasi-Newton family is the Symmetric Rank-One (SR1) method. In this research, we propose a new variant of the Quasi-Newton SR1 method that utilizes the Barzilai-Borwein step size. Our analysis demonstrates that the updated matrix resulting from the proposed method is both symmetric and positive definite. Additionally, our numerical experiments show that the proposed SR1 method, when combined with the PCG method, is effective in solving unconstrained optimization problems, as evidenced by its low number of iterations and function evaluations. Furthermore, we demonstrate that our proposed SR1 method is more efficient in solving large-scale problems with a varying number of variables compared to the original method. The numerical results of applying the new SR1 method to neural network problems also reveal its effectiveness.

 2023-01
Mathematics and Statistics (Issue : 1) (Volume : 11)
A New Quasi-Newton Method with PCG Method for Nonlinear Optimization Problems

The major stationary iterative method used to solve nonlinear optimization problems is the quasi-Newton (QN) method. Symmetric Rank-One (SR1) is a method in the quasi-Newton family. This algorithm converges towards the true Hessian quickly and has computational advantages for sparse or partially separable problems [1]. Thus, investigating the efficiency of the SR1 algorithm is significant. However, the matrix generated by the SR1 update may not always be positive definite, and the denominator may vanish or become zero. To overcome these drawbacks of the SR1 method and obtain better performance than the standard SR1 method, in this work we derive a new vector y_k^* depending on the Barzilai-Borwein step size, yielding a new SR1 method. This updating formula is then combined with the preconditioned conjugate gradient (PCG) method. With the aid of an inexact line search procedure under the strong Wolfe conditions, the new SR1 method is proposed and its performance is evaluated in comparison to the conventional SR1 method. It is proven that the updated matrix of the new SR1 method, H_{k+1}^{new}, is symmetric and positive definite, given that H_k is initialized to the identity matrix. In this study, the proposed method solved 13 problems effectively in terms of the number of iterations (NI) and the number of function evaluations (NF). Regarding NF, the new SR1 method also outperformed the classic SR1 method. The proposed method is shown to be more efficient in solving relatively large-scale problems (5,000 variables) compared to the original method. The numerical results show the proposed method to be significantly faster, effective, and suitable for solving large-dimension nonlinear equations.
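
For reference, the classical SR1 update of the inverse-Hessian approximation, into which the paper substitutes its new Barzilai-Borwein-based vector y_k^*, is

    H_{k+1} = H_k + \frac{(s_k - H_k y_k)(s_k - H_k y_k)^T}{(s_k - H_k y_k)^T y_k},

where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; the update is typically skipped when the denominator is close to zero, which is exactly the drawback the abstract mentions.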

 2023-01
International Research Journal of Modernization in Engineering Technology and Science (Issue : 1) (Volume : 5)
A new three-term conjugate gradient method for training neural networks with global convergence

Conjugate gradient (CG) methods constitute excellent neural network training methods, characterized by simplicity, flexibility, numerical efficiency, and low memory requirements. In this paper, we introduce a new three-term conjugate gradient method for solving optimization problems, and it has been tested on artificial neural networks (ANN) for training a feed-forward neural network. The new method satisfies the descent condition and the sufficient descent condition, and the global convergence of the new (NTTCG) method has been established. The results of numerical experiments on some well-known test functions show that our new modified method is very effective, based on the number of function evaluations and the number of iterations; we also include numerical results for training feed-forward neural networks in comparison with other well-known methods in this field.

 2022-10
New Trends in Mathematical Sciences (Issue : 4) (Volume : 10)
A New Version Coefficient of Three-Term Conjugate Gradient Method to Solve Unconstrained Optimization

This paper presents a new three-term conjugate gradient method for solving unconstrained optimization problems. The main aim is to upgrade the search direction of the conjugate gradient method to present a more active new three-term method. Our new method satisfies the descent and sufficient descent conditions and the global convergence property. Furthermore, the numerical results show that the new method has better numerical performance in comparison with the standard (PRP) method, based on an implementation of our new method on some unconstrained optimization test functions according to the number of iterations (NOI) and the number of function evaluations (NOF).

 2022-10
General Letters in Mathematics (Issue : 1) (Volume : 9)
A new self-scaling variable metric (DFP) method for unconstrained optimization problems

In this study, a new self-scaling variable metric (VM)-updating method for solving nonlinear unconstrained optimization problems is presented. The general strategy of the new VM-updating is to propose a new quasi-Newton condition used to update the usual DFP Hessian a number of times, in a way specified at certain iterations, combined with the PCG method to improve the performance of the Hessian approximation. We show that it produces a positive definite matrix. Experimental results indicate that the new suggested method is more efficient than the standard DFP method with respect to the number of function evaluations (NOF) and the number of iterations (NOI).
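
For reference, the standard DFP update of the inverse-Hessian approximation that the new quasi-Newton condition feeds into is

    H_{k+1} = H_k - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \frac{s_k s_k^T}{s_k^T y_k},

with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k; the paper's self-scaling modification is defined there.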

 2020-10
International Journal of Advanced Trends in Computer Science and Engineering (Issue : 6) (Volume : 8)
A New Quasi-Newton (SR1) With PCG Method for Unconstrained Nonlinear Optimization

The quasi-Newton (QN) equation plays a key role in contemporary nonlinear optimization. In this paper, we present a new symmetric rank-one (SR1) method using the preconditioned conjugate gradient (PCG) method for solving unconstrained optimization problems. The suggested method has an algorithm in which the usual SR1 Hessian is updated. We show that the new quasi-Newton (SR1) method maintains the quasi-Newton condition and the positive definite property. Numerical experiments are reported in which the new algorithm produces better numerical results than the normal (SR1) method using the PCG algorithm, based on the number of iterations (NOI) and the number of function evaluations (NOF).

 2019-12
General Letters in Mathematics (Issue : 2) (Volume : 7)
A new class of three-term conjugate gradient methods for solving unconstrained minimization problems

Conjugate gradient (CG) methods, which usually generate descent search directions, are beneficial for large-scale unconstrained optimization models because of their low memory requirements and simplicity. This paper studies the three-term CG method for unconstrained optimization. We modify a three-term CG method based on the formula t* suggested by Kafaki and Ghanbari [11], using some well-known CG formulas for unconstrained optimization. Our proposed method satisfies both the descent and the sufficient descent conditions. Furthermore, if the exact line search is used, the new proposed method reduces to the classical CG method. The numerical results show that the suggested method is promising and exhibits better numerical performance in comparison with the three-term (ZHS-CG) method, based on an implementation of the suggested method on some standard unconstrained optimization test functions.

 2019-12
Journal of Duhok University (Issue : 1) (Volume : 22)
MODIFIED CONJUGATE GRADIENT METHOD FOR TRAINING NEURAL NETWORKS BASED ON LOGISTIC MAPPING

In this paper, we suggest a modified conjugate gradient method for training neural networks which ensures the descent and sufficient descent conditions. The global convergence of our proposed method has been studied. Finally, the test results show that, in general, the modified method is superior and more efficient when compared to other standard conjugate gradient methods.

 2019-10
Science Journal of University of Zakho (Issue : 1) (Volume : 7)
A New Conjugate Gradient Coefficient for Unconstrained Optimization Based On Dai-Liao

This paper proposes a new conjugate gradient method for unconstrained optimization based on the Dai-Liao (DL) formula; the descent condition and sufficient descent condition for our method are provided. The numerical results and comparison show that the proposed algorithm is potentially efficient when compared with (PR), based on the number of iterations (NOI) and the number of function evaluations (NOF).

 2019-03
Science Journal of University of Zakho (Issue : 1) (Volume : 4)
A New Conjugate Gradient for Unconstrained Optimization Based on Step Size of Barzilai and Borwein

In this paper, a new formula is suggested for the conjugate gradient method for solving unconstrained optimization problems, based on the step size of Barzilai and Borwein. Our new proposed CG method satisfies the descent condition, the sufficient descent condition, and global convergence properties. Numerical comparisons with a standard conjugate gradient algorithm show that this algorithm is very effective, based on the number of iterations and the number of function evaluations.

 2016-06

Thesis

2016-11-20
Some New Algorithms for Unconstrained Optimization and Their Application in Neural Networks

In this thesis, we proposed eight new algorithms in the field of unconstrained optimization: four algorithms in conjugate gradients and four algorithms of the PCG method with self-scaling quasi-Newton. The first three algorithms are based on developments of the Dai-Liao (DL) algorithm and the fourth algorithm on Polak-Ribière (PR). Several conditions on the above algorithms have been studied: our proposed algorithms satisfy the descent and sufficient descent conditions, the first three algorithms satisfy the conjugacy condition, and the global convergence property is established for the first and third algorithms. Also, four algorithms of the PCG method with self-scaling quasi-Newton SR1 and DFP are suggested; we show that these algorithms satisfy the quasi-Newton condition and the positive definite property. The above eight algorithms were applied to some known standard nonlinear test functions and proved their proficiency when comparing their numerical results with those of other well-known algorithms. In addition, we apply our new algorithms in the field of neural networks, where they are used to increase the efficiency of neural networks when comparing their numerical results with some well-known algorithms.

 2016

Training Course

2016-05-11 to 2017-05-07
Diploma of Teaching

A course of academic capacity development (Teaching Methods and Research Methodology) at the University of Duhok for a period of six months.

 2016
2014-05-03 to 2014-08-11
English language proficiency course

An English language proficiency course at the Training and Development Center, University of Duhok (a total of 160 hours).

 2014