Global convergence of a new conjugate gradient method with inexact line search

Received Feb 13, 2020. Revised Aug 13, 2020. Accepted Oct 1, 2020.

In this paper, we propose a new nonlinear conjugate gradient method (FRA) that satisfies a sufficient descent condition and is globally convergent under the strong Wolfe-Powell inexact line search. Our numerical experiments show the efficiency of the new method in solving a set of problems from the CUTEst package; the proposed formula gives excellent numerical results in terms of CPU time, number of iterations, and number of gradient evaluations when compared with the WYL, DY, PRP, and FR methods.


INTRODUCTION
The optimization problem finds application in several fields, such as pure mathematics, mathematical and computational physics, fluid dynamics, traffic routing in telecommunication systems [1], cyber-physical security [2], intelligent transportation systems [3], and smart grids [4]. The conjugate gradient (CG) method is an effective one for solving large-scale unconstrained optimization problems because it does not require the storage of any matrices. Well-known conjugate gradient methods include [5][6][7][8][9], and the global convergence properties of these methods have been studied in [9][10][11][12].
In this paper, we consider the following unconstrained optimization problem:

min f(x), x ∈ R^n, (1)

where f is smooth and its gradient ∇f(x) is available. The CG method generates iterates by

x_{k+1} = x_k + α_k d_k, α_k > 0, k = 0, 1, 2, 3, ..., (2)

d_k = −g_k + β_k d_{k−1}, d_0 = −g_0, (3)

where the parameter β_k characterizes the CG method and g_k denotes ∇f(x_k). The main difference among CG methods is in the formulas used to compute their parameters β_k; some of the well-known CG methods are reviewed in [13]. A very famous formula for computing β_k was proposed by Fletcher and Reeves (FR) [5] as follows:

β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖², (4)

where ‖.‖ denotes the Euclidean norm. This formula is usually considered the first nonlinear CG parameter [14].
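The FR parameter (4) and the direction update (3) can be sketched directly; the following is a minimal illustration (the gradient values below are made-up numbers, not from the paper's test set):

```python
import numpy as np

def fr_beta(g_new, g_old):
    """Fletcher-Reeves parameter, eq. (4): ||g_k||^2 / ||g_{k-1}||^2."""
    return (g_new @ g_new) / (g_old @ g_old)

def cg_direction(g_new, d_old, beta):
    """CG search direction, eq. (3): d_k = -g_k + beta_k * d_{k-1}."""
    return -g_new + beta * d_old

# toy values for illustration only
g_prev = np.array([3.0, 4.0])    # ||g_{k-1}|| = 5
g_curr = np.array([0.0, 1.0])    # ||g_k||   = 1
beta = fr_beta(g_curr, g_prev)   # 1 / 25 = 0.04
d_new = cg_direction(g_curr, np.array([-3.0, -4.0]), beta)
```

Note that β_k^{FR} shrinks when the gradient norm drops sharply between iterations, which damps the contribution of the previous direction.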
Given the direction d_k, the ideal choice for the steplength α_k would be the global minimizer of f(x_k + α d_k); since computing it exactly is usually too expensive, conditions that α_k must satisfy are imposed instead. To find the step length α_k, we use the strong Wolfe-Powell (SWP) line search:

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k, (5)

|g(x_k + α_k d_k)^T d_k| ≤ σ |g_k^T d_k|, (6)

where 0 < δ < 1/2 and 0 < σ < 1. The strong Wolfe conditions were used to establish global convergence in [9,12] and [14][15][16][17]. The pioneering work on the global convergence of the FR method with inexact line search is due to Al-Baali [18], who proved that the FR method satisfies the sufficient descent condition and is globally convergent under the SWP conditions with 0 < σ < 1/2 [9,19]. This result was later extended to σ = 1/2. It has also been shown that the FR method with the SWP line search may fail to produce a descent direction when σ > 1/2 [21][22][23]. The paper is organized as follows. In Section 2 we introduce the new algorithm; in Section 3 we analyze the global convergence property of the new method. Finally, numerical results and conclusions are given in Sections 4 and 5.

NEW ALGORITHM OF FRA
We propose a new formula β_k^{FRA} for the CG method. The sequence of iterates in the new method is obtained from (2), with the direction d_k computed by (3), while the parameter β_k in the new method is given by (10), where FRA denotes the new modified method designed by Ahmed Chergui. Note that, for the direction defined by (3) with the CG parameter computed by (10), the sufficient descent condition is satisfied by the new direction. In the new CG method, the step length α_k is determined by the SWP conditions; to this aim, we use a backtracking approach to compute the steplength. We are now ready to state the algorithm of the new CG method.

Algorithm 1
Step 1: Given x_0 ∈ R^n, set k = 1, choose δ, σ ∈ (0, 1), and set d_0 = −g_0 = −∇f(x_0).
Step 2: Compute β_k and α_k using (10) and (4)-(7).
Step 3: Compute d_k by (3); if ‖g_k‖ = 0, then stop.
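The structure of Algorithm 1 can be sketched as follows. Since the FRA parameter of eq. (10) is not reproduced in this excerpt, `beta_fn` is left as a pluggable stand-in (the FR formula (4) is used in the example run), and a plain Armijo backtracking loop stands in for the full SWP search; both substitutions are assumptions of this sketch, not the paper's method:

```python
import numpy as np

def cg_algorithm1(f, grad, beta_fn, x0, tol=1e-6, max_iter=5000):
    """Skeleton of Algorithm 1 with a pluggable CG parameter beta_fn."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # Step 1: d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:          # stopping test on ||g_k||
            break
        if g @ d >= 0:                       # safeguard: enforce descent
            d = -g
        alpha = 1.0                          # backtracking stand-in for SWP
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d                    # iterate update, eq. (2)
        g_new = grad(x)
        d = -g_new + beta_fn(g_new, g, d) * d   # direction update, eq. (3)
        g = g_new
    return x

# example run with the FR parameter (4) plugged in, on a toy quadratic
beta_fr = lambda g_new, g_old, d: (g_new @ g_new) / (g_old @ g_old)
quad = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
quad_grad = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])
x_min = cg_algorithm1(quad, quad_grad, beta_fr, np.zeros(2))
```

The descent safeguard is not part of Algorithm 1 as stated; it is included only so the sketch cannot stall if the plugged-in β_k produces an ascent direction.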

THE GLOBAL CONVERGENCE PROPERTIES
In this section, we analyze the convergence of the FRA method. To this aim, we make the following assumptions:

Assumption 1
(H1) The objective function f is bounded below on the level set Ω = {x ∈ R^n : f(x) ≤ f(x_0)} and is continuously differentiable in a neighborhood of Ω.
(H2) The gradient is Lipschitz continuous in that neighborhood, so a constant M ≥ 0 exists such that ‖∇f(x) − ∇f(y)‖ ≤ M‖x − y‖ for all x, y in the neighborhood.

The following lemma provides a lower bound for the steplength α_k generated by Algorithm 1. The result of this lemma will be needed in the rest of this section.

Convergence analysis
Lemma 1. Let the step length α_k be generated by Algorithm 1. Then, under assumptions (H1) and (H2), there exists a positive constant C such that

α_k ≥ −C g_k^T d_k / ‖d_k‖². (25)

Proof: Subtracting g_k^T d_k from both sides of the curvature condition (6), we have (g(x_k + α_k d_k) − g_k)^T d_k ≥ (σ − 1) g_k^T d_k; combining this with the Lipschitz bound (g(x_k + α_k d_k) − g_k)^T d_k ≤ α_k M ‖d_k‖² from (H2), we obtain that (25) is satisfied with C = (1 − σ)/M, and the proof is completed.

The next lemma is known as the Zoutendijk condition [24].

Lemma 2. Suppose Assumption 1 holds and {x_k} is generated by Algorithm 1. Then

Σ_{k≥0} (g_k^T d_k)² / ‖d_k‖² < +∞. (28)

Proof: From the descent condition (5) and Lemma 1, for any k we have f(x_k) − f(x_{k+1}) ≥ −δ α_k g_k^T d_k ≥ δ C (g_k^T d_k)² / ‖d_k‖². Moreover, from hypothesis (H1), {f(x_k)} is a decreasing sequence bounded below, so it has a finite limit, which shows that lim_{k→∞} f(x_{k+1}) < +∞; summing the inequality over k then gives (28).
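For reference, the standard route from the Zoutendijk condition of Lemma 2 to global convergence runs as follows (this is the textbook argument, stated here under the sufficient descent condition the paper asserts for the FRA direction; the constant c is generic):

```latex
% Sufficient descent: there is c > 0 with
%   g_k^T d_k \le -c \, \lVert g_k \rVert^2  for all k.
% Substituting into the Zoutendijk condition (28):
\sum_{k \ge 0} \frac{\left( g_k^T d_k \right)^2}{\lVert d_k \rVert^2}
  \;\ge\; c^2 \sum_{k \ge 0} \frac{\lVert g_k \rVert^4}{\lVert d_k \rVert^2},
% so the right-hand series is finite. If \lVert g_k \rVert were bounded
% away from zero while \lVert d_k \rVert grew at most linearly, the
% series would diverge; this contradiction yields
\liminf_{k \to \infty} \lVert g_k \rVert = 0 .
```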

NUMERICAL EXPERIMENT
In this section, we report numerical experiments that indicate the efficiency of the new algorithm. To this aim, we implemented the new modified Fletcher-Reeves algorithm (FRA, Algorithm 1), the classical Fletcher-Reeves (FR) algorithm, and the WYL [10], DY [9], and PRP [6] methods. The numerical results are given for different initial points. We used ε = 10^{-6}, σ = 0.1, and δ = 0.01 under the inexact SWP line search. The implementation was in MATLAB R2010; the performance results are shown in Figures 1-5, comparing the results obtained from solving 17 test problems from [25].
In our experiments, the stopping tolerance for the algorithms is ε = 10^{-6} on the gradient norm. Also, a failure is reported when the number of iterations exceeds 20000 or when the step length becomes less than eps = 10^{-6}. We use the performance profiles of [26,27], with the total number of iterations, the total number of function evaluations, and the running time of each algorithm as performance measures. Figure 1 reports the number of function evaluations: it can be seen that FRA is the best solver with probability around 80%, while the probability of being the best solver is around 60%, 26%, 18%, and 7% for FR, PRP, WYL, and DY, respectively. The performance index in Figure 2 is the total number of iterations. From this figure, we observe that the new method (FRA) obtains the most wins, on approximately 70% of all test problems, and the probability of being the best solver is 55%, 29%, 26%, and 8% for FR, PRP, WYL, and DY, respectively. The CPU time is illustrated in Figure 3, from which it can again be observed that FRA is the best algorithm. Another important feature of these three figures is that the graph of the FRA algorithm rises faster than those of the other algorithms. From the presented results, we conclude that the FRA method performs better than the FR, PRP, WYL, and DY methods in solving unconstrained optimization problems. In Table 1, the FRA method was successful in all attempts to achieve the optimal solution, while the other methods failed.
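The performance profile values quoted above can be reproduced from raw per-problem costs in a few lines; the following is a minimal sketch of the profile construction of [26,27], with a made-up cost matrix for illustration (not the paper's data):

```python
import numpy as np

def performance_profile(T):
    """Build a performance profile from a cost matrix T, where T[p, s]
    is solver s's cost (e.g. iterations or CPU time) on problem p.
    Returns the ratio matrix and rho(s, tau), the fraction of problems
    on which solver s is within a factor tau of the best solver."""
    T = np.asarray(T, dtype=float)
    ratios = T / T.min(axis=1, keepdims=True)   # r[p, s] >= 1
    n_prob = T.shape[0]

    def rho(s, tau):
        return np.sum(ratios[:, s] <= tau) / n_prob

    return ratios, rho

# toy data: 4 problems, 2 solvers (iteration counts, illustrative only)
T = [[10, 20],
     [15, 15],
     [30, 10],
     [12, 24]]
ratios, rho = performance_profile(T)
p0, p1 = rho(0, 1.0), rho(1, 1.0)   # win rates at tau = 1
```

At τ = 1 the profile gives each solver's probability of being the best solver, which is exactly the percentage reported for Figures 1-3; as τ grows, ρ approaches each solver's overall success rate.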

CONCLUSION
In this paper, we have proposed a new CG method named FRA for solving large-scale unconstrained optimization problems. We proved the global convergence and the sufficient descent condition of this method under the inexact SWP line search. Numerical experiments show that the new FRA method is more efficient than the other methods DY, WYL, FR, and PRP.