
The authors have declared that no competing interests exist.

Conceived and designed the experiments: GY XD WL. Performed the experiments: GY XD WL. Analyzed the data: GY XD WL XW ZC ZS. Contributed reagents/materials/analysis tools: GY XD WL XW ZC ZS. Wrote the paper: GY XD WL.

Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is designed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method exploits two kinds of information: function values and gradient values. Both methods possess the following good properties: 1) β_{k} ≥ 0; 2) the search direction has the trust region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate them. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

As is well known, the conjugate gradient method is very popular and effective for solving the following unconstrained optimization problem:

min_{x ∈ ℜ^{n}} f(x), where f: ℜ^{n} → ℜ is continuously differentiable. The iterates are generated by x_{k+1} = x_{k} + α_{k} d_{k}, where g_{k} = ∇f(x_{k}), β_{k} ∈ ℜ is a scalar, α_{k} > 0 is a step length that is determined by some line search, and d_{k} denotes the search direction, defined by d_{1} = −g_{1} and d_{k+1} = −g_{k+1} + β_{k} d_{k}. Different conjugate gradient methods have different choices for β_{k}. Among the popular methods are the DY conjugate gradient method and the PRP conjugate gradient method, whose coefficient is β_{k}^{PRP} = g_{k}^{T} y_{k−1}/‖g_{k−1}‖^{2}, where y_{k−1} = g_{k}−g_{k−1}. The PRP conjugate gradient method is currently considered to have the best numerical performance, but it does not have good convergence properties. With an exact line search, the global convergence of the PRP conjugate gradient method has been established by Polak and Ribière for convex objective functions. Later work suggested restricting β_{k} to be not less than zero and proved global convergence under the hypothesis that the sufficient descent condition is satisfied; Gilbert and Nocedal studied such nonnegative variants in depth. It has also been observed that β_{k} may be negative even though the objective function is uniformly convex, and, when the strong Wolfe-Powell line search is used, Dai constructed an example showing that the PRP method may fail to converge.
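For reference, the PRP coefficient β_{k}^{PRP} = g_{k}^{T}(g_{k} − g_{k−1})/‖g_{k−1}‖^{2} is only a few lines of code. This sketch assumes NumPy; the names `g_new` and `g_old` are ours, not the paper's:

```python
import numpy as np

def beta_prp(g_new, g_old):
    """PRP coefficient: g_k^T y_{k-1} / ||g_{k-1}||^2, with y_{k-1} = g_k - g_{k-1}."""
    y = g_new - g_old
    return float(g_new @ y) / float(g_old @ g_old)

def beta_prp_plus(g_new, g_old):
    """Nonnegative restriction beta = max{beta_PRP, 0}."""
    return max(beta_prp(g_new, g_old), 0.0)
```

For example, with g_{k−1} = (1, 0)^{T} and g_{k} = (0, 2)^{T}, the coefficient is β^{PRP} = 4.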

From the above observations and the cited literature, the sufficient descent condition and the restriction that β_{k} be not less than zero are very important for establishing the global convergence of the conjugate gradient method.

The weak Wolfe-Powell (WWP) line search is designed to compute α_{k} and is usually used in the global convergence analysis. The WWP line search requires α_{k} to satisfy f(x_{k} + α_{k} d_{k}) ≤ f(x_{k}) + δ α_{k} g_{k}^{T} d_{k} and g(x_{k} + α_{k} d_{k})^{T} d_{k} ≥ σ g_{k}^{T} d_{k}, where 0 < δ < σ < 1.
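The two WWP conditions can be implemented with a standard bracketing/bisection scheme. The following is a generic textbook implementation, not the paper's code; the defaults δ = 0.1 and σ = 0.9 are common choices:

```python
import numpy as np

def wwp_line_search(f, grad, x, d, delta=0.1, sigma=0.9, max_iter=60):
    """Return alpha satisfying the weak Wolfe-Powell conditions:
       f(x + a d) <= f(x) + delta*a*g^T d   (sufficient decrease)
       grad(x + a d)^T d >= sigma*g^T d     (curvature)."""
    lo, hi, alpha = 0.0, np.inf, 1.0
    f0, g0d = f(x), float(grad(x) @ d)
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + delta * alpha * g0d:
            hi = alpha                     # sufficient decrease fails: shrink
        elif float(grad(x + alpha * d) @ d) < sigma * g0d:
            lo = alpha                     # curvature fails: grow
        else:
            return alpha
        alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return alpha
```

On the quadratic f(x) = ½‖x‖² with d = −∇f(x), the unit step already satisfies both conditions, so the search returns α = 1.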

Recently, many new conjugate gradient methods have been proposed and analyzed.

In Section 2, we state the motivation behind our approach and give a new modified PRP conjugate gradient method and a new algorithm for solving the unconstrained optimization problem.

Wei et al. proposed a modified PRP formula, yielding the so-called NPRP method.

The NPRP method possesses better convergence properties. The above formula for y_{k−1} contains only gradient value information, but some new y_{k−1} formulas in the literature also use function value information; they modify the standard difference y_{k−1} = g_{k}−g_{k−1} by adding a term based on function values.

Li and Qu proposed another modified PRP conjugate gradient method along these lines.

Under suitable conditions, Li and Qu established the global convergence of their method.

Motivated by the above discussions, we propose a new modified PRP conjugate gradient method, whose formula involves two positive parameters.

As the defining formula shows, the new β_{k} is not less than zero.

Given an initial point x_{1} ∈ ℜ^{n}, the first algorithm sets d_{1} = −∇f(x_{1}) = −g_{1} and repeats the following steps until the stopping criterion holds: compute the step length α_{k} by the WWP line search; let x_{k+1} = x_{k} + α_{k} d_{k}, then calculate g_{k+1}; finally, compute β_{k} and the new search direction d_{k+1}, set k := k + 1, and continue.
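The loop above can be sketched in code. Since the paper's exact β_{k} formula is stated symbolically in the text, this sketch substitutes the classical PRP+ coefficient max{β^{PRP}, 0} and a simple Armijo backtracking rule in place of the WWP line search; it illustrates the structure of the algorithm, not the authors' precise method:

```python
import numpy as np

def cg_minimize(f, grad, x, tol=1e-6, max_iter=1000):
    """Conjugate gradient skeleton: d_1 = -g_1, then
    x_{k+1} = x_k + alpha_k d_k and d_{k+1} = -g_{k+1} + beta_k d_k."""
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        gd = float(g @ d)
        if gd >= 0:                      # safeguard: restart with steepest descent
            d, gd = -g, -float(g @ g)
        alpha = 1.0                      # Armijo backtracking (stand-in for WWP)
        while alpha > 1e-16 and f(x + alpha * d) > f(x) + 1e-4 * alpha * gd:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(float(g_new @ (g_new - g)) / float(g @ g), 0.0)   # PRP+
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_star = cg_minimize(lambda v: float(v @ v), lambda v: 2 * v, np.array([3.0, -4.0]))
```

On the simple quadratic above, the iterates converge to the minimizer at the origin.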

Some suitable assumptions are often used to analyze the global convergence of the conjugate gradient method. Here, we state them as follows.

The level set Ω = {x ∈ ℜ^{n} ∣ f(x) ≤ f(x_{1})} is bounded.

In some neighborhood N of Ω, the gradient ∇f is Lipschitz continuous; i.e., there exists a constant L > 0 such that ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y ∈ N.

By Assumption 3.1, it is easy to obtain that there exist two positive constants that bound ‖x‖ and ‖∇f(x)‖ on Ω, respectively.

Lemma: the search directions {d_{k}} generated by the first algorithm satisfy the sufficient descent condition g_{k}^{T} d_{k} ≤ −c‖g_{k}‖^{2} for some constant c > 0.

The proof is complete.

We know directly from the above lemma that our new method has the sufficient descent property.

Lemma: let the sequences {x_{k}}, {α_{k}}, and {d_{k}} be generated by the first algorithm under Assumption 3.1.

Combining the above inequality with Assumption 3.1 (ii) yields

Summing the above inequalities over k yields

By Assumption 3.1, the sequence {f(x_{k})} is bounded below, so we obtain

This finishes the proof.

The global convergence of the first algorithm is established in the following theorem.

Theorem: suppose that Assumption 3.1 holds and that the sequences {g_{k}} and {d_{k}} are generated by the first algorithm. Then lim inf_{k→∞} ‖g_{k}‖ = 0.

If d_{k} = 0, we directly get g_{k} = 0 from the sufficient descent condition. For d_{k} ≠ 0, by the Cauchy-Schwarz inequality, we can easily obtain

We can obtain

Using

Finally, when

Let

This lemma also shows that the search direction of our algorithm has the trust region property.

Lemma: let the sequences {x_{k}}, {α_{k}}, {d_{k}}, and {g_{k}} be generated by the first algorithm.

By the definition of the search direction, we have

From the above inequality, we can obtain

When g_{k+1} and d_{k+1} are calculated by the corresponding equations, the above results still hold.

If ∣f(x_{k})∣ ≤ ε_{2}, the absolute function value decrease is used in the stopping rule; the tolerances are set to ε_{2} = ε_{3} = 10^{−6}. When the total number of iterations is greater than one thousand, the test program is stopped. The test results are given in the tables below, where x_{1} denotes the initial point, Dim denotes the dimension of the test function, NI denotes the total number of iterations, and NFG = NF + NG (NF and NG denote the number of function evaluations and the number of gradient evaluations, respectively).

Schwefel function:

Langerman function:

Schwefel's function:

Sphere function:

Griewangk function:

Rosenbrock function:

Ackley function:

Rastrigin function:

Problems | Dim | x_{1} | NI/NFG | f(x) |
---|---|---|---|---|

1 | 50 | (-426,-426,…,-426) | 2/9 | 6.363783e-004 |

120 | (-426,-426,…,-426) | 2/9 | 1.527308e-003 | |

200 | (-426,-426,…,-426) | 2/9 | 2.545514e-003 | |

1000 | (-410,-410,…,-410) | 3/12 | 1.272757e-002 | |

2 | 50 | (3,3,…,3) | 0/2 | -1.520789e-060 |

120 | (5,5,…,5) | 0/2 | 0.000000e+000 | |

200 | (6,6,…,6) | 0/2 | 0.000000e+000 | |

1000 | (1,1,…,1) | 0/2 | -7.907025e-136 | |

3 | 50 | (-0.00001,0,-0.00001,0,…) | 2/8 | 1.561447e-009 |

120 | (-0.00001,0,-0.00001,0,…) | 2/8 | 1.769900e-008 | |

200 | (-0.00001,0,-0.00001,0,…) | 2/8 | 7.906818e-008 | |

1000 | (0.000001,0,0.000001,0,…) | 2/8 | 9.619586e-008 | |

4 | 50 | (-4,-4,…,-4) | 1/6 | 1.577722e-028 |

120 | (-2,-2,…,-2) | 1/6 | 3.786532e-028 | |

200 | (1,1,…,1) | 1/6 | 7.730837e-027 | |

1000 | (3,3,…,3) | 1/6 | 1.079951e-024 | |

5 | 50 | (-7,0,-7,0,…) | 2/10 | 0.000000e+000 |

120 | (0.592,0,0.592,0,…) | 4/14 | 3.183458e-007 | |

200 | (0.451,0,0.451,0,…) | 4/14 | 3.476453e-007 | |

1000 | (0.38,0,0.38,0,…) | 1/6 | 0.000000e+000 | |

6 | 50 | (1.001,1.001,…,1.001) | 2/36 | 4.925508e-003 |

120 | (1.001,1.001,…,1.001) | 2/36 | 1.198551e-002 | |

200 | (1.001,1.001,…,1.001) | 2/36 | 2.006158e-002 | |

1000 | (1.001,1.001,…,1.001) | 2/36 | 1.009107e-001 | |

7 | 50 | (0.01,0,0.01,0,…) | 0/2 | 3.094491e-002 |

120 | (-0.05,0,-0.05,0,…) | 0/2 | 2.066363e-001 | |

200 | (0.01,0,0.01,0,…) | 0/2 | 3.094491e-002 | |

1000 | (0.07,0,0.07,0,…) | 0/2 | 3.233371e-001 | |

8 | 50 | (0.003,0.003,…,0.003) | 3/26 | 0.000000e+000 |

120 | (0.005,0.005,…,0.005) | 2/9 | 0.000000e+000 | |

200 | (0.006,0,0.006,0,…) | 2/9 | 0.000000e+000 | |

1000 | (0.015,0.015,…,0.015) | 2/8 | 0.000000e+000 |

Problems | Dim | x_{1} | NI/NFG | f(x) |
---|---|---|---|---|

1 | 50 | (-426,-426,…,-426) | 2/24 | 6.363783e-004 |

120 | (-426,-426,…,-426) | 2/11 | 1.527308e-003 | |

200 | (-426,-426,…,-426) | 3/41 | 2.545514e-003 | |

1000 | (-410,-410,…,-410) | 3/41 | 1.272757e-002 | |

2 | 50 | (3,3,…,3) | 0/2 | -1.520789e-060 |

120 | (5,5,…,5) | 0/2 | 0.000000e+000 | |

200 | (6,6,…,6) | 0/2 | 0.000000e+000 | |

1000 | (1,1,…,1) | 0/2 | -7.907025e-136 | |

3 | 50 | (-0.00001,0,-0.00001,0,…) | 2/8 | 1.516186e-009 |

120 | (-0.00001,0,-0.00001,0,…) | 2/8 | 1.701075e-008 | |

200 | (-0.00001,0,-0.00001,0,…) | 2/8 | 7.579825e-008 | |

1000 | (0.000001,0,0.000001,0,…) | 2/8 | 9.198262e-008 | |

4 | 50 | (-4,-4,…,-4) | 1/6 | 1.577722e-028 |

120 | (-2,-2,…,-2) | 1/6 | 3.786532e-028 | |

200 | (1,1,…,1) | 1/6 | 7.730837e-027 | |

1000 | (3,3,…,3) | 1/6 | 1.079951e-024 | |

5 | 50 | (-7,0,-7,0,…) | 4/16 | 3.597123e-013 |

120 | (0.592,0,0.592,0,…) | 5/17 | 3.401145e-007 | |

200 | (0.451,0,0.451,0,…) | 5/17 | 4.566281e-007 | |

1000 | (0.38,0,0.38,0,…) | 1/6 | 0.000000e+000 | |

6 | 50 | (1.001,1.001,…,1.001) | 2/36 | 4.925508e-003 |

120 | (1.001,1.001,…,1.001) | 2/36 | 1.198551e-002 | |

200 | (1.001,1.001,…,1.001) | 2/36 | 2.006158e-002 | |

1000 | (1.001,1.001,…,1.001) | 2/36 | 1.009107e-001 | |

7 | 50 | (0.01,0,0.01,0,…) | 0/2 | 3.094491e-002 |

120 | (-0.05,0,-0.05,0,…) | 0/2 | 2.066363e-001 | |

200 | (0.01,0,0.01,0,…) | 0/2 | 3.094491e-002 | |

1000 | (0.07,0,0.07,0,…) | 0/2 | 3.233371e-001 | |

8 | 50 | (0.003,0.003,…,0.003) | 2/10 | 0.000000e+000 |

120 | (0.005,0.005,…,0.005) | 2/10 | 0.000000e+000 | |

200 | (0.006,0,0.006,0,…) | 2/10 | 0.000000e+000 | |

1000 | (0.015,0.015,…,0.015) | 2/22 | 3.636160e-009 |

It is easy to see that the two algorithms are effective for the eight test problems listed in the above two tables.

For the above eight test problems, the numerical performance of the two algorithms is similar.

A new algorithm is given for solving nonlinear equations in the next section. The sufficient descent property and the trust region property of the new algorithm are proved in Section 6; moreover, we establish the global convergence of the new algorithm. In Section 7, the numerical results are presented.

We consider the system of nonlinear equations
g(x) = 0, where g: ℜ^{n} → ℜ^{n} is a continuously differentiable and monotone function, i.e., (g(x) − g(y))^{T}(x − y) ≥ 0 holds for all x, y ∈ ℜ^{n}.

We know directly that the problem

The iterative formula x_{k+1} = x_{k} + α_{k} d_{k} is commonly used, and the step length α_{k} and search direction d_{k} are very important for dealing with large-scale problems. When dealing with large-scale nonlinear equations and unconstrained optimization problems, there are many popular methods for defining d_{k}, such as conjugate gradient methods, spectral gradient methods, and limited-memory quasi-Newton approaches. Some new line search methods have been proposed to determine α_{k}; Li and Li, for instance, defined a derivative-free line search that selects the largest acceptable step from a geometrically decreasing trial sequence, with two positive parameters.

Solodov and Svaiter combined the search direction d_{k} with certain line search methods to calculate a step length α_{k} satisfying −g(z_{k})^{T} d_{k} ≥ σ α_{k} ‖d_{k}‖^{2}, where z_{k} = x_{k} + α_{k} d_{k}. For any x* with g(x*) = 0, the monotonicity of g gives g(z_{k})^{T}(x* − z_{k}) ≤ 0, whereas the line search guarantees g(z_{k})^{T}(x_{k} − z_{k}) > 0.

Thus, the hyperplane {x ∈ ℜ^{n} ∣ g(z_{k})^{T}(x − z_{k}) = 0} strictly separates the current iterate x_{k} from the zeros of the system of equations.

Then, the iterate x_{k+1} can be obtained by projecting x_{k} onto the above hyperplane. The projection formula can be written as x_{k+1} = x_{k} − (g(z_{k})^{T}(x_{k} − z_{k})/‖g(z_{k})‖^{2}) g(z_{k}).
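The projection step is a one-line computation; this sketch assumes NumPy arrays and uses the names from the text:

```python
import numpy as np

def project_iterate(x_k, z_k, g_z):
    """Project x_k onto the hyperplane {x : g(z_k)^T (x - z_k) = 0}:
    x_{k+1} = x_k - [g(z_k)^T (x_k - z_k) / ||g(z_k)||^2] g(z_k)."""
    t = float(g_z @ (x_k - z_k)) / float(g_z @ g_z)
    return x_k - t * g_z
```

By construction, the new iterate lies on the hyperplane: g(z_{k})^{T}(x_{k+1} − z_{k}) = 0.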

Yuan et al. proposed a modified method in which β_{k} is defined in terms of y_{k−1} = g_{k}−g_{k−1}, and a derivative-free line search method is used to obtain α_{k}.

Motivated by our new modified PRP conjugate gradient formula proposed in Section 2, we propose the following modified PRP conjugate gradient formula for nonlinear equations, which again involves two positive parameters. It is easy to see that the resulting β_{k} is not less than zero.

Given an initial point x_{1} ∈ ℜ^{n} and positive parameters, Algorithm 5.1 repeats the following steps: compute the search direction d_{k} by the new formula and the step length α_{k} by the derivative-free line search; let z_{k} = x_{k} + α_{k} d_{k}; if g(z_{k}) = 0, stop and let x_{k+1} = z_{k}; otherwise, calculate x_{k+1} by the projection formula and set k := k + 1.
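A minimal end-to-end sketch of this projection scheme for a monotone system follows. The direction d_{k} = −g(x_{k}) is a stand-in for the paper's modified PRP direction, and the line search is a simple backtracking version of the Solodov-Svaiter condition −g(z_{k})^{T} d_{k} ≥ σ α_{k}‖d_{k}‖^{2}; all parameter values are illustrative:

```python
import numpy as np

def solve_monotone(g, x, tol=1e-6, sigma=0.01, rho=0.5, max_iter=1500):
    """Projection method for monotone g(x) = 0; d = -g(x) stands in
    for the paper's modified PRP search direction."""
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx) <= tol:
            break
        d = -gx
        alpha = 1.0
        while True:                       # derivative-free backtracking line search
            z = x + alpha * d
            if float(-g(z) @ d) >= sigma * alpha * float(d @ d):
                break
            alpha *= rho
        gz = g(z)
        if np.linalg.norm(gz) <= tol:
            return z                      # z_k already solves the system
        # projection onto the separating hyperplane
        x = x - float(gz @ (x - z)) / float(gz @ gz) * gz
    return x

root = solve_monotone(lambda v: v + np.sin(v), np.array([1.0, -2.0]))
```

The map g(x) = x + sin(x) is monotone (its Jacobian is diag(1 + cos x_i) ⪰ 0), and the iterates converge to its unique zero at the origin.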

When we analyze the global convergence of Algorithm 5.1, we require the following suitable assumptions.

(a) The solution set of the problem is nonempty. (b) The function g is Lipschitz continuous; i.e., there exists a constant L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ ℜ^{n}.

By Assumption 6.1, it is easy to obtain that there exists a positive constant that bounds ‖g(x_{k})‖ for all k.

Lemma: the search directions {d_{k}} generated by Algorithm 5.1 possess the sufficient descent property and the trust region property.

The proof is similar to that of Lemma 3.1 in the cited reference and is therefore omitted.

Lemma: let the sequences {x_{k}}, {z_{k}}, {d_{k}}, and {g_{k}} be generated by Algorithm 5.1 under Assumption 6.1. Then the following conclusions hold.

x_{k+1} = x_{k} + α_{k} d_{k}

there exists a positive constant that satisfies

By Assumption 6.1 (b) and the above inequality, we have

By the preceding equations, we obtain

Thus, we obtain

Similar to Theorem 3.1 of the cited reference, we can obtain the following global convergence result.

Theorem: suppose that Assumption 6.1 holds and that {x_{k}}, {z_{k}}, {d_{k}}, and {g_{k}} are generated by Algorithm 5.1. Then lim inf_{k→∞} ‖g(x_{k})‖ = 0.

When the β_{k} formula of the famous PRP conjugate gradient method is used to replace β_{k} in step 3 of Algorithm 5.1, the resulting method is called the PRP algorithm. We test Algorithm 5.1 and the PRP algorithm on a set of problems in this section. The test environment is MATLAB 7.0 on a Windows 7 system. The initial parameters are given below.

When the number of iterations is greater than or equal to one thousand five hundred, the test program is also stopped. The test results are given in the tables below; the program terminates when the norm of g(x_{k}) satisfies the stopping tolerance or when the line search cannot find an α_{k} that is acceptable. NG and NI stand for the number of gradient evaluations and iterations, respectively. Dim denotes the dimension of the test function, and cputime denotes the CPU time in seconds. GF denotes the final function norm when the program terminates. The test functions all have the following form.

x_{0} = (1,1,⋯,1)^{T}.

x_{0} = (−1,−1,⋯,−1)^{T}.

x_{0} = (⋯)^{T}.

x_{0} = (0, 0, ⋯, 0)^{T}.

Function | Dim | NI/NG | cputime | GF |
---|---|---|---|---|

1 | 3000 | 55/209 | 2.043613 | 9.850811e-006 |

5000 | 8/33 | 0.858005 | 6.116936e-006 | |

30000 | 26/127 | 100.792246 | 8.983556e-006 | |

45000 | 7/36 | 62.681202 | 7.863794e-006 | |

50000 | 5/26 | 56.659563 | 5.807294e-006 | |

2 | 3000 | 43/86 | 1.076407 | 8.532827e-006 |

5000 | 42/84 | 2.745618 | 8.256326e-006 | |

30000 | 38/76 | 73.039668 | 8.065468e-006 | |

45000 | 37/74 | 164.284653 | 8.064230e-006 | |

50000 | 36/72 | 201.288090 | 9.519786e-006 | |

3 | 3000 | 5/6 | 0.093601 | 1.009984e-008 |

5000 | 5/6 | 0.249602 | 6.263918e-009 | |

30000 | 18/33 | 32.775810 | 2.472117e-009 | |

45000 | 21/39 | 91.229385 | 2.840234e-010 | |

50000 | 21/39 | 108.202294 | 2.661223e-010 | |

4 | 3000 | 95/190 | 2.137214 | 9.497689e-006 |

5000 | 97/194 | 5.834437 | 9.048858e-006 | |

30000 | 103/206 | 194.954450 | 8.891642e-006 | |

45000 | 104/208 | 446.568463 | 9.350859e-006 | |

50000 | 104/208 | 549.529123 | 9.856874e-006 | |

5 | 3000 | 64/128 | 1.497610 | 9.111464e-006 |

5000 | 65/130 | 4.102826 | 9.525878e-006 | |

30000 | 70/140 | 132.117247 | 8.131796e-006 | |

45000 | 70/140 | 297.868309 | 9.959279e-006 | |

50000 | 71/142 | 374.964004 | 8.502923e-006 | |

6 | 3000 | 1/2 | 0.031200 | 0.000000e+000 |

5000 | 1/2 | 0.062400 | 0.000000e+000 | |

30000 | 1/2 | 1.918812 | 0.000000e+000 | |

45000 | 1/2 | 4.258827 | 0.000000e+000 | |

50000 | 1/2 | 5.194833 | 0.000000e+000 | |

7 | 3000 | 35/71 | 0.842405 | 9.291878e-006 |

5000 | 34/69 | 2.121614 | 8.658237e-006 | |

30000 | 30/61 | 58.391174 | 8.288490e-006 | |

45000 | 29/59 | 135.627269 | 8.443996e-006 | |

50000 | 29/58 | 153.801386 | 9.993530e-006 | |

8 | 3000 | 0/1 | 0.015600 | 0.000000e+000 |

5000 | 0/1 | 0.046800 | 0.000000e+000 | |

30000 | 0/1 | 1.326008 | 0.000000e+000 | |

45000 | 0/1 | 2.917219 | 0.000000e+000 | |

50000 | 0/1 | 3.510022 | 0.000000e+000 |

Function | Dim | NI/NG | cputime | GF |
---|---|---|---|---|

1 | 3000 | 58/220 | 2.043613 | 9.947840e-006 |

5000 | 24/97 | 2.496016 | 9.754454e-006 | |

30000 | 29/141 | 109.668703 | 9.705424e-006 | |

45000 | 13/66 | 118.108357 | 9.450575e-006 | |

50000 | 10/51 | 112.383120 | 9.221806e-006 | |

2 | 3000 | 48/95 | 1.138807 | 8.647042e-006 |

5000 | 46/91 | 2.932819 | 9.736889e-006 | |

30000 | 41/81 | 78.733705 | 9.983531e-006 | |

45000 | 40/79 | 181.709965 | 9.632281e-006 | |

50000 | 40/79 | 212.832164 | 9.121412e-006 | |

3 | 3000 | 11/12 | 0.171601 | 1.012266e-008 |

5000 | 11/12 | 0.530403 | 8.539532e-009 | |

30000 | 23/38 | 39.749055 | 2.574915e-009 | |

45000 | 26/44 | 100.542645 | 2.931611e-010 | |

50000 | 26/44 | 123.864794 | 2.838473e-010 | |

4 | 3000 | 104/208 | 2.246414 | 9.243312e-006 |

5000 | 106/212 | 6.193240 | 9.130520e-006 | |

30000 | 113/226 | 219.821009 | 8.747379e-006 | |

45000 | 114/228 | 487.908728 | 9.368026e-006 | |

50000 | 114/228 | 611.976323 | 9.874918e-006 | |

5 | 3000 | 35/53 | 0.561604 | 2.164559e-006 |

5000 | 35/53 | 1.716011 | 1.291210e-006 | |

30000 | 35/53 | 55.926358 | 1.336971e-006 | |

45000 | 33/49 | 116.361146 | 2.109293e-006 | |

50000 | 33/49 | 147.452145 | 2.225071e-006 | |

6 | 3000 | 1/2 | 0.031200 | 0.000000e+000 |

5000 | 1/2 | 0.062400 | 0.000000e+000 | |

30000 | 1/2 | 1.965613 | 0.000000e+000 | |

45000 | 1/2 | 4.290028 | 0.000000e+000 | |

50000 | 1/2 | 5.257234 | 0.000000e+000 | |

7 | 3000 | 40/80 | 0.904806 | 9.908999e-006 |

5000 | 39/78 | 2.386815 | 9.198351e-006 | |

30000 | 34/68 | 66.440826 | 9.515010e-006 | |

45000 | 33/66 | 140.026498 | 9.366998e-006 | |

50000 | 33/66 | 173.597913 | 8.886013e-006 | |

8 | 3000 | 0/1 | 0.015600 | 0.000000e+000 |

5000 | 0/1 | 0.031200 | 0.000000e+000 | |

30000 | 0/1 | 1.279208 | 0.000000e+000 | |

45000 | 0/1 | 2.808018 | 0.000000e+000 | |

50000 | 0/1 | 3.432022 | 0.000000e+000 |

From the above two tables, we can see that both algorithms successfully solve the test problems, and Algorithm 5.1 usually requires fewer iterations and gradient evaluations than the PRP algorithm.

We use the performance profile tool of Dolan and Moré to compare the two algorithms.
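For reference, a Dolan-Moré performance profile is straightforward to compute; the cost matrix `T` below is hypothetical, not the paper's data:

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s] = cost (e.g. NI or cputime) of solver s on problem p.
    Returns rho[s, i] = fraction of problems whose performance ratio
    r_{p,s} = T[p, s] / min_s T[p, s] is at most taus[i]."""
    ratios = T / T.min(axis=1, keepdims=True)
    return np.array([[float(np.mean(ratios[:, s] <= tau)) for tau in taus]
                     for s in range(T.shape[1])])

T = np.array([[1.0, 2.0],   # solver 1 wins
              [4.0, 2.0],   # solver 2 wins
              [3.0, 3.0]])  # tie
rho = performance_profile(T, taus=[1.0, 2.0])
```

In this toy example each solver is best on two of the three problems, so ρ_s(1) = 2/3 for both solvers, and both profiles reach 1 at τ = 2.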

From the above two tables and three figures, we see that Algorithm 5.1 is effective and competitive for solving large-scale nonlinear equations.

(i) This paper provides the first new algorithm, based on the first modified PRP conjugate gradient method, in Sections 1-4. The β_{k} formula of the method includes both gradient value and function value information. The global convergence of the algorithm is established under suitable conditions. The trust region property and the sufficient descent property of the method are proved without the use of any line search method. On a set of test functions, the numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems.

(ii) The second new algorithm, based on the second modified PRP conjugate gradient method, is presented in Sections 5-7. The new algorithm has global convergence under suitable conditions. The trust region property and the sufficient descent property of the method are proved without the use of any line search method. Numerical results on several test functions show that the second algorithm is very effective for solving large-scale nonlinear equations.

This work is supported by the China NSF (Grant Nos. 11261006 and 11161003), NSFC Nos. 61232016 and U1405254, the Guangxi Science Fund for Distinguished Young Scholars (No. 2015GXNSFGA139001), and the PAPD fund of Jiangsu's advantageous disciplines. The authors wish to thank the editor and the referees for their useful suggestions and comments, which greatly improved this paper.