On the global convergence of a fast Halley’s family to solve nonlinear equations
⁎Corresponding author. h.bennis@umi.ac.ma (Hamid Bennis)
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.
Peer review under responsibility of King Saud University.
Abstract
The purpose of this paper is to suggest an approach for increasing the convergence speed of Halley’s method for solving a nonlinear equation. The approach is based on the second-order Taylor polynomial and on Halley’s formula. By applying it a certain number of times, we obtain a new family of methods. The originality of this family lies in the fact that all its sequences are generated from one exceptional formula that depends on a natural integer parameter p. In addition, under certain conditions, the convergence speed of its sequences increases with p. The convergence analysis shows that the order of convergence of all proposed methods is three. A study of their global convergence is carried out. To illustrate the performance of this family, several numerical comparisons are made with other third- and higher-order methods.
Keywords
Iterative methods
Root finding
Newton's method
Order of convergence
Third order method
1 Introduction
One of the most frequently encountered problems in science and engineering is solving the nonlinear equation f(x) = 0.
The second-order Newton’s method (Traub, 1964),

x_{n+1} = x_n − f(x_n)/f′(x_n),   (1)

is also well known. In order to increase the convergence speed, new algorithms have been developed: Halley, super-Halley, Chebyshev, Euler, Chun, Sharma (Sharma et al. (2012)) and Dubeau (2013) have proposed third-order methods. Ghanbari (2011), Fang et al. (2008), Solaiman and Hashim (2019), Noor et al. (2007), Chun and Ham (2007), Kou and Li (2007), and Wang and Zhang (2014) proposed families of higher-order methods. Zhou and Zhang (2020) have constructed some interesting algorithms with variable convergence rate ((1 + 2p)-order). Zhang (2020) has recently elaborated a fully derivative-free conjugate residual method using a secant condition.
In this paper, based on Halley's method and the Taylor polynomial, we construct an interesting family for finding simple roots of nonlinear equations with cubic convergence. The originality of this family is manifested in its special formula, which depends on a natural integer parameter p, and in the increase of the convergence speed of its sequences as p grows, provided certain hypotheses are satisfied.
The rest of this article is organized as follows: Section 2 presents the derivation of the family from Halley’s method, Section 3 provides the convergence study of the new methods, and the advantages of the new family are presented in Section 4. The numerical results are provided in Section 5, while the last section gives our conclusion.
2 Derivation of the family from Halley's method
Among the famous third order methods, we quote Halley's method, given by

x_{n+1} = x_n − 2 f(x_n) f′(x_n) / (2 f′(x_n)² − f(x_n) f″(x_n)).   (3)

The second-order Taylor polynomial of f at x_n is given by:

P(x) = f(x_n) + f′(x_n)(x − x_n) + (1/2) f″(x_n)(x − x_n)².

The goal is to find the point x_{n+1} where the curve of P passes through the x-axis (Scavo and Thoo, 1995), which is the solution of P(x_{n+1}) = 0; simplifying the above yields

x_{n+1} = x_n − f(x_n) / (f′(x_n) + (1/2) f″(x_n)(x_{n+1} − x_n)).   (4)

Eq. (4) is an implicit scheme because it does not directly express x_{n+1} as a function of x_n. In order to make it explicit, we replace the x_{n+1} placed on the right side of (4) by Halley’s iterate (3), and we get the Super-Halley method:

x_{n+1} = x_n − (f(x_n)/f′(x_n)) (2 − L_n) / (2(1 − L_n)),  where L_n = f(x_n) f″(x_n) / f′(x_n)².   (5)
By repeating the above procedure p times, each time replacing the x_{n+1} located on the right side of (4) with the last correction found, we derive the following general family of Halley’s method B_p:
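Since the explicit formulas of the family are not reproduced in this version, the following sketch implements the repeated-substitution construction described above; the indexing convention (one substitution for p = 0 giving Halley's step, two giving the Super-Halley step) is an assumption, not taken from the paper.

```python
def family_step(f, df, d2f, x, p):
    """One iteration of the Halley-type family B_p (sketch).

    Starting from the Newton correction, the unknown correction on the
    right-hand side of the implicit scheme (4) is replaced by the last
    correction found, p + 1 times in total.
    """
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    delta = -fx / dfx                        # Newton correction
    for _ in range(p + 1):                   # repeated substitution in (4)
        delta = -fx / (dfx + 0.5 * d2fx * delta)
    return x + delta
```

With this convention, one substitution reproduces Halley's iterate: for f(x) = x² − 4 from x = 3, the step returns 3 − 60/62 ≈ 2.0323.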
Let p be a natural integer parameter and a real function sufficiently smooth in some neighborhood of a zero. The family of Halley’s method B_p, defined by the sequences (6), can be expressed in the following form:
Let and be defined by the sequences given by (6) and (7) respectively. We will prove by induction that, for all , .
If , formula (6) reduces to formula (5), given by:
=
Furthermore, according to (7), we have
then
So
Now, we assume that, for a given , we have , we will show that .
From (7), we have: where and, from (6), we have:
As then
So where
Consequently and the induction is completed.
Now, let's try to find the general expression of the polynomial as a function of .
From (7), we obtain
where
is the integer part of
and
Thus,
We deduce that
knowing that
for
, we find
, so
We admit that:
By using (8)–(10) and by applying the same method, we obtain:
For and
For and and by conjecture, we obtain for :
Let p be a natural integer parameter and a real function sufficiently smooth in some neighborhood of a zero. The family of Halley’s method B_p, defined by the sequences (7), can be expressed in the following explicit form:
From (7), we have
From (11), we obtain the same result:
Now, we assume that, for a given , and given by (11) is equal to the one defined by (7). We will show that is also the same.
From (7), we have:
Furthermore, according to (11), we have
If is even, we have
So
As and , then
Consequently
The case where is odd is analogous. We conclude, by induction, that (7) is expressed by (11) for all
The scheme (11) is powerful because it regenerates Halley’s method , the Super-Halley method , and several new methods such as and , given by
3 Convergence study of new methods
3.1 Order of convergence
Let be a real function. Assume that is sufficiently smooth in some neighborhood of a simple zero . Further, assume that the initial value is sufficiently close to . Then the iteration process defined by (11) converges cubically to .
According to Sharma et al. (2012), the iteration process defined by:
From (11), for all we have and for all
So
Consequently, the sequences defined by (11) are cubically convergent for all
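As a quick numerical sanity check of the cubic order (a sketch: Halley's method applied to the assumed test function f(x) = x³ − 2, which is not a function from the paper), the computational order of convergence can be estimated from three successive errors, using high-precision decimal arithmetic so that e_{n+1} ≈ C·e_n³ remains visible:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 60          # enough digits to observe cubic error decay

def halley_step(x):
    # Halley's method (3) on the assumed test function f(x) = x^3 - 2
    f, df, d2f = x**3 - 2, 3 * x**2, 6 * x
    return x - 2 * f * df / (2 * df**2 - f * d2f)

root = Decimal(2) ** (Decimal(1) / Decimal(3))
x = Decimal("1.5")
errors = []
for _ in range(3):
    x = halley_step(x)
    errors.append(abs(x - root))

# computational order: log(e_{n+1}/e_n) / log(e_n/e_{n-1}), expected close to 3
order = math.log(float(errors[2] / errors[1])) / math.log(float(errors[1] / errors[0]))
```

Under these assumptions the estimate comes out close to 3, in agreement with the theorem.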
3.2 Global convergence
The following lemmas will be useful in the sequel.
3.2.1 Important lemmas
Let We assume that the polynomial , defined in (11), admits real roots of which is the smallest. Then, the root is strictly positive and the polynomial is strictly positive over the interval
Let ; we have
As for all , we have
Then
We also have
and where is the smallest root of . As, in addition, the function is continuous over , then for all we have
This ends the proof of Lemma 1.
In lemma 1, we have assumed that, for a given , the function admits at least one real root. Now we will show that this is always true.
The polynomials , defined by (11) for different values of the non-zero natural integer , each admit at least one real root, and the sequence , formed by their smallest positive real roots, is strictly decreasing.
By induction, we have
Then
So
Let , we suppose that for every , we have
We will show that
From (7), we have and, from lemma 1, we have for all
So
Since
then
As
then
Furthermore, we have
So
As the function is continuous on , then, by the Intermediate Value Theorem, there exists such that
As is the smallest real root of , then
So
This ends the proof of Lemma 2.
In order to study the global convergence of the sequences (11), we must calculate the derivative of the iterative function of and study its sign.
Let The iterative function of relative to the sequence (11) is given by:
The derivative
is given by:
and
Proof of lemma 3. Let
From (14) and (15) we have
Using (23) and (24), formula (22) becomes
Using
given in (11), we obtain
Substituting (30) into (29), we obtain (21). Furthermore, we have: where
Using given in (11), we obtain (19).
Developing the expression of
given by (27), we obtain
We note that . So, by using (18) and (31), we obtain (17).
This ends the proof of Lemma 3.
The formulas (16)–(21) are of great importance because they give the derivatives for all the methods of the family (11). In the literature, these derivatives are known only for the following two cases (Ezquerro and Hernández (1997), Theorem 6.2):
-
Halley's method
-
Super Halley’s method
A simple calculation allows us to verify that our formulas give exactly the same expressions.
Now let's examine the sign of , which we will use later.
The polynomials , defined in (20) and (21) for different values of the natural integer p, are all strictly positive on , where are the smallest real roots of the polynomials .
Let We have for all :
So,
Using (15), we have where and
By differentiating given above, we find:
Then, developing the calculations, we obtain
Consequently, for all and for all ,
This ends the proof of Lemma 4.
Now, we present a study on the global convergence of the family’s methods (Hernández (1988); Ezquerro and Hernández (1997)).
3.3 Monotonic convergence of new sequences
Let , and be defined by (20), (21) and (15). We consider the function defined by:
Let and on an interval containing the root of . The sequence (11) is decreasing (resp. increasing) and converges to from any point satisfying (resp. ).
Let on containing . Let's look for the condition on for the convergence to be monotonic.
If
The Mean Value Theorem gives
Using (17) and lemma 4, we obtain
Thus, if the condition (37) is satisfied for all , then we have: for all , in particular
Since , then, from (39), we obtain
By induction, we obtain that, for all
Furthermore, from (11), we have
From lemma 1, the function is strictly positive on . As
Then,
So
By induction we obtain for all
Thereby, the sequence (11) is decreasing and converges to a limit , so
Thus
Since, from lemma 1, we have then so
As is the unique root of then
This ends the proof of Theorem 3.
Let on an interval containing the root of . The sequence (11) is decreasing (resp. increasing) and converges to from any point satisfying (resp. ).
For , we have
So
As, by hypothesis then
By applying Theorem 3, we obtain the desired result.
The formula (37) is of great importance because it gives the necessary conditions on to ensure the monotonic convergence of all the methods of the new Halley’s family (11). In the literature, these conditions are known only for the cases of Halley’s method and the Super-Halley method (Hernández (1988), theorem (i)).
3.3.1 Convergence of the methods
Here, we treat the case where the convergence of (11) is ensured in any form: monotonic convergence, oscillating convergence, or irregular oscillation (between two successive iterations, there is sometimes oscillation and sometimes none). This case is guaranteed if for all we have
We consider the functions and defined on by: where is defined in (15), and are given in (18) and (20).
Let , on an interval containing the root of , and
Let , on containing . We treat the case where . By the Mean Value Theorem, we have: where . From (16), we can prove that if the condition (42) is satisfied, then for all we have
As then for all we obtain
Thus, there exists such that, for all we have
So
By induction, we get, for all
In addition, since it follows that, for all and therefore, the sequence (11) converges to .
4 Advantages of the new family
It is interesting to study the variation of the convergence speed of the sequences (11) as a function of the parameter . For this, we will compare
Let be defined respectively by given by (11). We have:
We have
Using (7), it follows that
So , and the proof of (43) is complete.
Let on an interval containing the root . Starting from the same initial point , the convergence rate of the sequence given by (11) is higher than that of
We proceed by induction. Let , and assume that the hypotheses of Theorem 5 are satisfied. If
From corollary 2 and lemma 2, if, for all we have
then the sequences are decreasing and converge to from any point .
Since then
We have and from lemma 1, for all , we have
So
We assume that
Since is increasing on , we get
In addition, we have:
So thus
This ends the proof of Theorem 5.
Theorem 5 states a result of great importance: under certain conditions, the convergence speed of the methods improves as the parameter increases. Thus, since can take large values, the convergence speed can always be improved with . As the methods of Halley and Super-Halley are obtained for , their convergence rate will be lower than that of the other methods of our family.
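The speed-up with p can be observed numerically. The sketch below uses an assumed test function f(x) = x³ − 64 (root 4) and starting point x₀ = 2.1 chosen for illustration; this is not the paper's example. It runs the repeated-substitution family step for a small and a larger value of p and compares the errors after two iterations:

```python
def family_step(f, df, d2f, x, p):
    # one step of the family: p + 1 substitutions of the last correction
    # into the implicit scheme (4), starting from the Newton correction
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    delta = -fx / dfx
    for _ in range(p + 1):
        delta = -fx / (dfx + 0.5 * d2fx * delta)
    return x + delta

# assumed test problem: f(x) = x^3 - 64, exact root 4
f, df, d2f = (lambda x: x**3 - 64), (lambda x: 3 * x**2), (lambda x: 6 * x)

def error_after(p, n_iters, x0=2.1):
    """Absolute error |x_n - 4| after n_iters iterations with parameter p."""
    x = x0
    for _ in range(n_iters):
        x = family_step(f, df, d2f, x, p)
    return abs(x - 4.0)
```

For this problem, after two iterations the method with p = 6 is already much closer to the root than the method with p = 0, illustrating the behavior announced by Theorem 5.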
5 Numerical results
Numerical computations reported here were carried out in MATLAB R2015b and the stopping criterion was taken as
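A minimal driver matching this setup can be sketched as follows; the tolerance 1e-15, the stopping criterion |x_{n+1} − x_n| < tol, and the test function are assumptions, since the paper's exact criterion is not shown in this version. NOFE counts three evaluations (f, f′, f″) per iteration:

```python
def halley(f, df, d2f, x0, tol=1e-15, max_iter=50):
    """Halley's method with an assumed stopping criterion |x_{n+1} - x_n| < tol.

    Returns (approximate root, NI, NOFE); each iteration evaluates
    f, f' and f'' once, hence NOFE = 3 * NI.
    """
    x, ni, nofe = x0, 0, 0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        nofe += 3
        x_new = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        ni += 1
        if abs(x_new - x) < tol:
            return x_new, ni, nofe
        x = x_new
    return x, ni, nofe

# illustrative run on an assumed test function f(x) = x^2 - 4 from x0 = 3
root, ni, nofe = halley(lambda x: x**2 - 4, lambda x: 2 * x, lambda x: 2.0, 3.0)
```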
5.1 Numerical comparison between some methods of the new family
We consider and we take . The conditions of Theorem 5 are satisfied for our methods given by (11) for . In Table 1, we note that:
- All sequences are increasing and converge to the zero ;
- The convergence speed of the methods increases with the parameter ;
- Our methods converge more rapidly than Halley’s method .
| | | | | | B6 |
|---|---|---|---|---|---|
| x₀ | 2.1 | 2.1 | 2.1 | 2.1 | 2.1 |
| x₁ | 3.608727895037079 | 3.58200315543224 | 3.91411497158407 | 3.96485646880613 | 3.977250615324 |
| x₂ | 3.977250615324702 | 3.99239178714579 | 3.99999975520541 | 3.99999999999823 | 4.0 |
| x₃ | 3.999988994594171 | 3.99999999674939 | 4.0 | 4.0 | – |
| x₄ | 4.0 | 4.0 | – | – | – |
This example confirms the importance of Theorem 5.
5.2 Comparison with other methods
The test functions used in Table 3 are given in Table 2.
| Test functions | Root ( ) | Test functions | Root ( ) |
|---|---|---|---|
| = | 3.00000000000000 | | 1.000000000000000 |
| = 1+ | 2.947530902542285 | | 0.714805912362777 |
| | 3.057103549994738 | | 3.000000000000000 |
| = | 1.85792082915019 | | 0.5235987755982989 |
| | 2.000000000000000 | | 0.910007572488709 |
| | −0.458962267536948 | | 0.5903344367965851 |
Table 3. Comparison with third order methods and with higher order methods: for each test function and initial point, the number of iterations (NI) and the number of function evaluations (NOFE) are reported for Newton’s method (N), Sharma’s method (S), Chun’s method (U), the Jiang–Han method (J), the Chun–Ham method (C), the Wang–Zhang method (W), the method of Noor et al. (R), and the proposed methods.
We indicate the number of iterations (NI) and the number of function evaluations (NOFE) required to meet the stopping criterion.
On the left side of Table 3, we compare our methods and , given by (11) for and , with Newton’s method (N) defined by (1) (Sharma et al. (2012)) and some cubically convergent methods: Sharma’s method (S) defined by (17) and the Jiang–Han method (J) defined by (19) in Sharma et al. (2012), Chun’s method (U) defined by (23) in Chun (2007), and Halley’s method ( ) defined by (3) above.
On the right side of the same table, we compare our methods and , given by (11) for and , with some higher order methods: the method of Wang and Zhang (2014) (W) defined by (19) (γ = β = −0.6), a fourth-order method; the method of Chun and Ham (2007) (C) defined by (10)–(12), a sixth-order method; and the method of Noor et al. (2007) (R) defined by Algorithm 2.4, a fifth-order method.
The results obtained for our methods are similar to or better than those of the other third- and higher-order methods, as they require the same or a smaller number of iterations and function evaluations. These results are promising and show the efficiency and speed of the new family.
6 Conclusion
In this paper, we constructed a new Halley family of third-order iterative techniques for solving nonlinear equations with simple roots. The originality of this family lies first in the fact that its sequences are governed by one exceptional formula depending on a natural integer parameter p, and second, in the case where certain conditions are met, in the improvement of the convergence speed of its methods as the value of p increases. In addition, a study of the global convergence of the new methods has been carried out. Finally, the performance of our methods is compared with some methods of similar or higher order. The numerical results showed the robustness, efficiency and speed of the proposed family's techniques.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
- A one-parameter family of third-order methods to solve nonlinear equations. Appl. Math. Comput.. 2007;189(1):126-130.
- Some sixth-order variants of Ostrowski root-finding methods. Appl. Math. Comput.. 2007;193(2):389-394.
- On comparisons of Chebyshev-Halley iteration functions based on their asymptotic constants. Int. J. Pure and Appl. Math.. 2013;85(5):965-981.
- Different acceleration procedures of Newton’s method. Novi Sad J. Math.. 1997;27(1):1-17.
- An efficient Newton-type method with fifth-order convergence for nonlinear equations. Comput. Appl. Math.. 2008;27(3):269-274.
- A new general fourth-order family of methods for finding simple roots of nonlinear equations. J. King Saud Univ.-Sci.. 2011;23(4):395-398.
- An improvement of the Jarratt’s method. Appl. Math. Comput.. 2007;189(2):1816-1821.
- A new modified Halley method without second derivatives for nonlinear equation. Appl. Math. Comput.. 2007;189(2):1268-1273.
- On the geometry of Halley’s method. Am. Math. Mon.. 1995;102(5):417-426.
- A unified approach to generate weighted Newton third order methods for solving nonlinear equations. J. Num. Math. Stochastics. 2012;4(1):48-58.
- Two new efficient sixth order iterative methods for solving nonlinear equations. J. King Saud Univ.-Sci.. 2019;31(4):701-705.
- Iterative Methods for the Solution of Equations. Englewood Cliffs, New Jersey: Prentice-Hall; 1964.
- High-order Newton-type iterative methods with memory for solving nonlinear equations. Math. Commun.. 2014;19:91-109.
- A derivative-free conjugate residual method using secant condition for general large-scale nonlinear equations. Numerical Algorithms. 2020;83(4):1277-1293.
- A modified Broyden-like quasi-Newton method for nonlinear equations. J. Comput. Appl. Math.. 2020;372:112744.