Salah Gazi Shareef


Assistant Professor

Specialties

Optimization

Education

Ph.D

Mathematics at Mosul

2005

Master's degree

Mathematics at Mosul

2000

Bachelor's degree

Mathematics at Kabardino-Balkarian / Russia

1991

Academic Title

Assistant Professor

2014-06-29

Lecturer

2005-05-24

Assistant Lecturer

2000-03-13

Published Journal Articles

International Journal of Analysis and Applications (Volume : 21)
A Combined Conjugate Gradient Quasi-Newton Method with Modification BFGS Formula

Abstract. The conjugate gradient and quasi-Newton methods have advantages and drawbacks: although quasi-Newton algorithms converge more rapidly than conjugate gradient algorithms, they require more storage. In 1976, Buckley designed a method that combines the CG method with QN updates; its performance is better than that of conjugate gradient algorithms but not as good as that of the quasi-Newton approach. This type of method is called the preconditioned conjugate gradient (PCG) method. In this paper, we introduce two new preconditioned conjugate gradient (PCG) methods that combine the conjugate gradient method with a new quasi-Newton update. The new quasi-Newton update preserves positive definiteness, and the direction of the new preconditioned conjugate gradient method is a descent direction. Numerical results show that the new preconditioned conjugate gradient method is more effective than standard preconditioning on several high-dimension test problems.
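
For reference, the classical BFGS update that such combined CG/QN methods build on has the standard form below; the paper's modified formula is not reproduced here.

\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k.
\]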

 2023-03
Journal of University of Duhok (Issue : 2) (Volume : 25)
A NEW CONJUGATE GRADIENT WITH GLOBAL CONVERGES FOR NONLINEAR PROBLEMS

The conjugate gradient (CG) method is one of the most popular and well-known iterative strategies for solving minimization problems. It has extensive applications in many domains, such as machine learning and neural networks, partly because of its simplicity in algebraic formulation and computer implementation, and partly because of its efficiency in solving large-scale unconstrained optimization problems. Fletcher and Reeves (1964) expanded the concept to nonlinear problems, and theirs is widely regarded as the first nonlinear conjugate gradient algorithm. Since then, other versions of the conjugate gradient method have been proposed. In section one of this paper, we derive a new conjugate gradient method for solving nonlinear minimization problems based on the parameter of Perry. In section two, we establish the descent and sufficient descent conditions. In section three, we study the global convergence of the new suggestion. In the fourth section, we present numerical findings to demonstrate the efficacy of the suggested technique. Finally, we provide a conclusion.
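
A minimal, illustrative sketch of the generic nonlinear CG iteration described here, using the classical Fletcher-Reeves parameter and a simple Armijo backtracking line search on the Rosenbrock test function. This is not the paper's Perry-based method, and all names (nonlinear_cg, rosenbrock, ...) are illustrative choices.

import numpy as np

def rosenbrock(x):
    # classical 2-D Rosenbrock test function
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=5000):
    """Generic nonlinear CG with the Fletcher-Reeves parameter."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                      # safeguard: restart if d is not descent
            d = -g
        alpha = 1.0                         # simple Armijo backtracking line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves parameter
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

print(nonlinear_cg(rosenbrock, rosenbrock_grad, [-1.2, 1.0]))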

 2023-01
Science Journal of University of Zakho (Issue : 4) (Volume : 10)
A NEW MODIFIED CONJUGATE GRADIENT FOR NONLINEAR MINIMIZATION PROBLEMS

The conjugate gradient method is a highly effective technique for solving unconstrained nonlinear minimization problems, is one of the most well-known methods, and has many applications. Conjugate gradient techniques are widely applied to large-scale unconstrained minimization problems. In this paper, we suggest a new conjugate gradient parameter for solving nonlinear unconstrained minimization problems, based on the parameter of Dai and Liao. We study the descent property, the sufficient descent property, and the global convergence of the new method, and we introduce numerical data to demonstrate its efficacy.
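
For context, the standard Dai and Liao parameter on which the new parameter is based is, for a fixed scalar \(t \ge 0\):

\[
\beta_k^{DL} = \frac{g_{k+1}^{T}\left(y_k - t\, s_k\right)}{d_k^{T} y_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k.
\]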

 2022-10
Mathematics and Statistics (Issue : 3) (Volume : 10)
A Descent Conjugate Gradient Method With Global Converges Properties for Non-Linear Optimization

Iterative methods such as the conjugate gradient method are well-known methods for solving nonlinear unconstrained minimization problems, partly because of their capacity to handle large-scale unconstrained optimization problems rapidly, and partly because of their simple algebraic representation and implementation in computer programs. The conjugate gradient method has wide applications in fields such as machine learning and neural networks. Fletcher and Reeves [1] expanded the approach to nonlinear problems in 1964; theirs is considered the first nonlinear conjugate gradient technique. Since then, many other conjugate gradient methods have been proposed. In this work, we propose a new conjugate gradient coefficient for finding the minimum of nonlinear unconstrained optimization problems, based on the parameter of Hestenes-Stiefel. Section one of this work contains the derivation of the new method. In section two, we establish the descent and sufficient descent conditions. In section three, we study the global convergence of the new proposal. In the fourth section, we give numerical results using some known test functions and compare the new method with Hestenes-Stiefel to demonstrate the effectiveness of the suggested method. Finally, we give conclusions.
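
The classical Hestenes-Stiefel parameter referred to here is:

\[
\beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k}, \qquad y_k = g_{k+1} - g_k.
\]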

 2022-06
Mathematics and Statistics (Issue : 6) (Volume : 9)
A Modified Perry's Conjugate Gradient Method Based on Powell's Equation for Solving Large-Scale Unconstrained Optimization

It is known that the conjugate gradient method remains popular among researchers focused on solving large-scale unconstrained optimization problems and nonlinear equations, because the method avoids the computation and storage of certain matrices, so its memory requirements are very small. In this work, a modification of Perry's conjugate gradient method that achieves global convergence under standard assumptions is presented and analyzed. The idea of the new method is based on Perry's method, using the equation introduced by Powell in 1978. The weak Wolfe-Powell conditions are used to choose the line search, and under this line search and suitable conditions we prove both the descent and sufficient descent conditions. In particular, numerical results show that the new conjugate gradient method is more effective and competitive than other standard conjugate gradient methods, including the CG-Hestenes and Stiefel (H/S) method, the CG-Perry method, and the CG-Dai and Yuan (D/Y) method. The comparison is carried out on a group of standard test problems with various dimensions from the CUTEst test library, and the comparative performance of the methods is evaluated by the total number of iterations and the total number of function evaluations.
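
The weak Wolfe-Powell conditions used to choose the step size \(\alpha_k\) take the standard form, for constants \(0 < \delta < \sigma < 1\):

\[
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\, \alpha_k\, g_k^{T} d_k,
\qquad
g(x_k + \alpha_k d_k)^{T} d_k \ge \sigma\, g_k^{T} d_k.
\]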

 2021-11
General Letters in Mathematics (GLM) (Issue : 2) (Volume : 9)
A New conjugate gradient method for unconstrained optimization problems with descent property

In this paper, we propose a new conjugate gradient method for solving nonlinear unconstrained optimization problems. The new method consists of three parts, the first of which is the parameter of Hestenes-Stiefel (HS). The proposed method satisfies the descent condition, the sufficient descent condition, and the conjugacy condition. We give some numerical results to show the efficiency of the suggested method.
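
For reference, the three conditions named in the abstract have the standard forms below, for some constant \(c > 0\) (the last being the classical conjugacy condition):

\[
g_k^{T} d_k < 0, \qquad
g_k^{T} d_k \le -c\, \lVert g_k \rVert^{2}, \qquad
d_{k+1}^{T} y_k = 0.
\]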

 2021-05
Refaad (Volume : 9)
A new self-scaling variable metric (DFP) method for unconstrained optimization problems

In this study, a new self-scaling variable metric (VM) updating method for solving nonlinear unconstrained optimization problems is presented. The general strategy of the new VM update is to propose a new quasi-Newton condition used to update the usual DFP Hessian a number of times, in a way specified at certain iterations, together with the PCG method, to improve the performance of the Hessian approximation. We show that it produces a positive definite matrix. Experimental results indicate that the new suggested method is more efficient than the standard DFP method with respect to the number of function evaluations (NOF) and the number of iterations (NOI).
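
The standard DFP update of the inverse Hessian approximation \(H_k\) that the method modifies is:

\[
H_{k+1} = H_k - \frac{H_k y_k y_k^{T} H_k}{y_k^{T} H_k y_k} + \frac{s_k s_k^{T}}{y_k^{T} s_k}.
\]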

 2020-06
Refaad
A new class of three-term conjugate Gradient methods for solving unconstrained minimization problems

Conjugate gradient (CG) methods, which usually generate descent search directions, are beneficial for large-scale unconstrained optimization models because of their low memory requirements and simplicity. This paper studies the three-term CG method for unconstrained optimization. We modify a three-term CG method based on the formula t* suggested by Kafaki and Ghanbari [11], using some well-known CG formulas for unconstrained optimization. Our proposed method satisfies both the descent and the sufficient descent conditions. Furthermore, if we use the exact line search, the new proposal reduces to the classical CG method. Numerical results from an implementation of the suggested method on some standard unconstrained optimization test functions show that it is promising and exhibits better numerical performance than the three-term (ZHS-CG) method.
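
Three-term CG methods generate search directions of the general form below, with scalar parameters \(\beta_k\) and \(\theta_k\); the paper's specific parameter choices are not reproduced here.

\[
d_{k+1} = -g_{k+1} + \beta_k d_k + \theta_k y_k.
\]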

 2019-12
International Journal of Advanced Trends in Computer Science and Engineering (Issue : 6) (Volume : 8)
A New Quasi-Newton (SR1) With PCG Method for Unconstrained Nonlinear Optimization

The quasi-Newton (QN) equation plays a key role in contemporary nonlinear optimization. In this paper, we present a new symmetric rank-one (SR1) method that uses the preconditioned conjugate gradient (PCG) method for solving unconstrained optimization problems. The suggested method has an algorithm in which the usual SR1 Hessian is updated. We show that the new quasi-Newton (SR1) method maintains the quasi-Newton condition and the positive definite property. Numerical experiments are reported in which the new algorithm produces better numerical results than the normal (SR1) method with the PCG algorithm, based on the number of iterations (NOI) and the number of function evaluations (NOF).
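
The standard symmetric rank-one update of the Hessian approximation \(B_k\) underlying the method is:

\[
B_{k+1} = B_k + \frac{\left(y_k - B_k s_k\right)\left(y_k - B_k s_k\right)^{T}}{\left(y_k - B_k s_k\right)^{T} s_k}.
\]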

 2019-12
Journal of University of Duhok (Volume : 22)
MODIFIED CONJUGATE GRADIENT METHOD FOR TRAINING NEURAL NETWORKS BASED ON LOGISTIC MAPPING

In this paper, we suggest a modified conjugate gradient method for training neural networks which guarantees the descent and sufficient descent conditions. The global convergence of our proposed method is studied. Finally, the test results show that, in general, the modified method is superior and more efficient than other standard conjugate gradient methods.

 2019-03
Science Journal of University of Zakho (Volume : 7)
A NEW CONJUGATE GRADIENT COEFFICIENT FOR UNCONSTRAINED OPTIMIZATION BASED ON DAI-LIAO

The conjugate gradient method plays an enormous role in solving unconstrained optimization problems, particularly large-scale ones. In this paper, a new conjugate gradient method for unconstrained optimization is proposed, based on the Dai-Liao (DL) formula and using the Barzilai and Borwein step size. Our new method satisfies both the descent and sufficient descent conditions. The numerical results show that the proposed algorithm is potentially efficient and performs better than the Polak and Ribiere (PR) algorithm, based on the number of iterations (NOI) and the number of function evaluations (NOF).

 2019-03
Journal of University of Zakho (Volume : 4)
A NEW CONJUGATE GRADIENT FOR UNCONSTRAINED OPTIMIZATION BASED ON STEP SIZE OF BARZILAI AND BORWEIN

In this paper, a new formula for the conjugate gradient parameter \(\beta_k\) is suggested for the conjugate gradient method for solving unconstrained optimization problems, based on the step size of Barzilai and Borwein. Our newly proposed CG method has the descent condition, the sufficient descent condition, and global convergence properties. Numerical comparisons with a standard conjugate gradient algorithm show that this algorithm is very effective based on the number of iterations and the number of function evaluations.
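
The Barzilai and Borwein step sizes referred to here have the two standard forms:

\[
\alpha_k^{BB1} = \frac{s_{k-1}^{T} s_{k-1}}{s_{k-1}^{T} y_{k-1}}, \qquad
\alpha_k^{BB2} = \frac{s_{k-1}^{T} y_{k-1}}{y_{k-1}^{T} y_{k-1}}.
\]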

 2016-04

Thesis

2005-04-10
On Optimally Conditional Quasi-Newton Update

In partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics

 2005
1999-11-27
The Effect of Line Search Procedures on the Efficiency of PCG Method

In partial fulfillment of the requirements for the degree of Master of Science in Mathematics

 1999

Conference

First International Scientific Conference, University of Zakho
 2013-04
Conference

University of Zakho

Training Course

2001-08-16 to 2001-08-19
Training course on computers for master's students

 2001
2001-07-29 to 2001-08-10
Training course on teaching for master's students

Training course on teaching methods

 2001