31(4): 701-705. doi: 10.1016/j.jksus.2018.03.021

Two new efficient sixth order iterative methods for solving nonlinear equations

Preparatory Year Deanship, King Faisal University, 31982 Hofouf, Ahsaa, Saudi Arabia
School of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia

⁎Corresponding author: Obadah Said Solaiman (obadah_sy@yahoo.com)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Peer review under responsibility of King Saud University.

Abstract

In this paper, we present two new iterative methods, one of which is free of the second derivative, for solving nonlinear equations. We derive these methods from the Taylor series expansion combined with Halley's method. The convergence analysis of the two methods is discussed, and it is established that both methods have sixth order of convergence. Several numerical examples show that the new methods are comparable with well-known existing methods of the same order.

Keywords

41-xx
65-xx
Root finding method
Halley’s method
Nonlinear equations
Iterative methods
Order of convergence

1 Introduction

One of the most important questions in mathematics is how to find a solution of the nonlinear equation f(x) = 0. The most famous method, due to Newton, can be derived from the first two terms of the Taylor series of f(x):

(1)
f(x) = f(x_0) + (x - x_0) f'(x_0).

As we want f(x) = 0, solving Eq. (1) for x gives

(2)
x = x_0 - f(x_0)/f'(x_0).

Hence, we can write it in iterative form as

(3)
x_{n+1} = x_n - f(x_n)/f'(x_n).

The order of Newton's method is 2, as proved by Traub (1964).
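As an illustration, Newton's iteration (3) can be sketched in a few lines of Python; the names `newton`, `f`, and `df`, and the tolerance defaults, are our own illustrative choices, not part of the paper.

```python
import math

# A minimal sketch of Newton's iteration (3); f and df are callables
# for the function and its first derivative, supplied by the caller.
def newton(f, df, x0, tol=1e-15, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)          # Eq. (3)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve cos(x) - x = 0 (test function f_4 of Section 4)
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.7)
```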

A large number of researchers have tried to improve Newton's method in order to obtain more accuracy and a higher order of convergence; see for example Abbasbandy (2003), Chun (2006), He (2003), Madhu and Jayaraman (2016) and the references therein.

If we stop at the third term of the Taylor series we have

(4)
f(x) = f(x_0) + (x - x_0) f'(x_0) + ((x - x_0)^2 / 2) f''(x_0).

Setting f(x) = 0 and taking (x - x_0) as a common factor, we obtain

(5)
0 = f(x_0) + (x - x_0) [ f'(x_0) + (1/2)(x - x_0) f''(x_0) ].

Reordering Eq. (5) gives

(6)
x = x_0 - f(x_0) / [ f'(x_0) + (1/2)(x - x_0) f''(x_0) ].

From Eq. (2) we have x - x_0 = -f(x_0)/f'(x_0), and substituting this in Eq. (6) we get

(7)
x = x_0 - 2 f(x_0) f'(x_0) / [ 2 (f'(x_0))^2 - f(x_0) f''(x_0) ].

Rewriting (7) in iterative form one obtains

(8)
x_{n+1} = x_n - 2 f(x_n) f'(x_n) / [ 2 (f'(x_n))^2 - f(x_n) f''(x_n) ],

which is a third order iterative method first introduced by Halley (1694).
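Halley's iteration (8) can be sketched similarly; the name `halley` and the stopping rule below are illustrative choices of ours, not part of the original method statement.

```python
import math

# Halley's method (8); f, df, d2f are the function and its first two
# derivatives, supplied by the caller.
def halley(f, df, d2f, x0, tol=1e-15, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x_new = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)  # Eq. (8)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f_3(x) = x^3 - 10 with x_0 = 2 (cf. Table 1)
root = halley(lambda x: x**3 - 10, lambda x: 3 * x**2, lambda x: 6 * x, 2.0)
```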

Recently, Kogan et al. (2017) proved that among all one-point iterative methods without memory of order p, methods of order p = 3 are the most efficient methods.

Many researchers have tried to modify Halley's method, either by increasing the order of convergence or by reducing the number of function evaluations per iteration by writing the new schemes without the second derivative, as it is not always easy to find the second derivative of a function. Noor and Noor (2007) implemented a two-step Halley's method using Newton's method as a predictor and Halley's method as a corrector; they proved that this two-step method has sixth order of convergence. Noor et al. (2007) modified the method of Noor and Noor (2007) using a finite difference scheme, established their new algorithm without use of the second derivative, and showed that the resulting scheme is of fifth order of convergence. Hafiz and Al-Goria (2012) established two new methods of orders seven and nine, based on a weighted combination of the midpoint and Simpson quadrature formulas together with the predictor-corrector technique. Very recently, Kumar et al. (2017) developed a parameter-based family of sixth order iterations free from the second derivative for solving nonlinear equations.

Commonly in the literature, the efficiency index is used to compare different iterative methods. This index is defined as p^{1/m}, where p is the order of convergence and m is the total number of functional evaluations per iteration.
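For instance, the efficiency indices compared later in the paper follow directly from this definition; the helper name `efficiency_index` is ours.

```python
# Efficiency index p**(1/m): p = order of convergence,
# m = functional evaluations per iteration (values as in Section 2).
def efficiency_index(p, m):
    return p ** (1.0 / m)

newton_ei = efficiency_index(2, 2)   # Newton:  ~1.414
halley_ei = efficiency_index(3, 3)   # Halley:  ~1.442
mh1_ei = efficiency_index(6, 5)      # MH1:     ~1.431
mh2_ei = efficiency_index(6, 4)      # MH2:     ~1.565
```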

In this paper, we present two new iterative methods, one of which is free of the second derivative. We derive these methods from the Taylor series expansion of f(x) combined with Halley's method. To reduce the number of functional evaluations per iteration, we approximate the second derivative in the first scheme and thereby construct a second scheme of the same order with a higher efficiency index. Both techniques are of sixth order of convergence.

The rest of the paper is organized as follows. The new methods are described in Section 2. In Section 3, the convergence analysis is carried out to establish the order of convergence. In Section 4, the methods are tested on some numerical examples, and comparisons of our methods with other well-known methods of the same order are summarized in tables. Finally, the conclusion is given in Section 5.

2 The new methods

2.1 Method 1

Consider a nonlinear equation of the type f(x) = 0, where f(x) is a real, sufficiently differentiable function defined on a real interval I.

For simplicity, assume that α ∈ I is a simple zero of f(x), that is f(α) = 0 and f'(α) ≠ 0, and assume that x_0 is an initial guess sufficiently close to α. From the Taylor series expansion of the function f(x) we obtain

(9)
f(x_0) + (x - x_0) f'(x_0) + ((x - x_0)^2 / 2) f''(x_0) = 0.

Reordering Eq. (9) gives

(10)
x = x_0 - f(x_0)/f'(x_0) - (x - x_0)^2 f''(x_0) / (2 f'(x_0)).

Now, from Halley's method in (7) we have

(11)
x - x_0 = -2 f(x_0) f'(x_0) / [ 2 (f'(x_0))^2 - f(x_0) f''(x_0) ].

After substituting (11) in (10), we obtain

(12)
x = x_0 - f(x_0)/f'(x_0) - 2 (f(x_0))^2 f'(x_0) f''(x_0) / [ 4 (f'(x_0))^4 - 4 f(x_0) (f'(x_0))^2 f''(x_0) + (f(x_0))^2 (f''(x_0))^2 ].

Rewriting Eq. (12) in iterative form as a corrector, with Newton's method as a predictor, gives the main result of this paper:
Algorithm 2.1

For a given x_0, compute the approximate solution x_{n+1} by the iterative scheme

(13)
y_n = x_n - f(x_n)/f'(x_n),

(14)
x_{n+1} = y_n - f(y_n)/f'(y_n) - 2 (f(y_n))^2 f'(y_n) f''(y_n) / [ 4 (f'(y_n))^4 - 4 f(y_n) (f'(y_n))^2 f''(y_n) + (f(y_n))^2 (f''(y_n))^2 ].

Algorithm 2.1 is called the modified Halley's method (MH1) and has sixth order of convergence. At each iteration, (MH1) needs two evaluations of the function, two evaluations of the first derivative, and one evaluation of the second derivative. Therefore, the efficiency index of this modified method is 6^{1/5} ≈ 1.431, which is slightly worse than the 3^{1/3} ≈ 1.442 of Halley's method.
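A minimal Python sketch of (MH1), assuming the caller supplies f and its first two derivatives; the stopping rule and iteration cap are illustrative choices of ours.

```python
import math

# Algorithm 2.1 (MH1): Newton predictor (13) followed by the
# Halley-type corrector (14).
def mh1(f, df, d2f, x0, tol=1e-15, max_iter=50):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / df(x)                      # predictor (13)
        fy, dfy, d2fy = f(y), df(y), d2f(y)
        num = 2 * fy**2 * dfy * d2fy
        den = 4 * dfy**4 - 4 * fy * dfy**2 * d2fy + fy**2 * d2fy**2
        x_new = y - fy / dfy - num / den          # corrector (14)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f_4(x) = cos(x) - x with x_0 = 1.7 (cf. Table 1)
root = mh1(lambda x: math.cos(x) - x,
           lambda x: -math.sin(x) - 1,
           lambda x: -math.cos(x), 1.7)
```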

2.2 Method 2

Now, in order to improve the efficiency index of Algorithm 2.1, we approximate the second derivative to reduce the number of functional evaluations needed in each iteration; the resulting method is free of the second derivative. To this end, we approximate f''(y_n) using a combination of data already known from previous steps.

Consider the function Q(t) = a + b(t - y_n) + c(t - y_n)^2 + d(t - y_n)^3, where a, b, c, and d are unknowns to be determined from the interpolation conditions f(x_n) = Q(x_n), f(y_n) = Q(y_n), f'(x_n) = Q'(x_n), f'(y_n) = Q'(y_n). These conditions yield a system of four linear equations in the four unknowns. Solving this system and setting f''(y_n) ≈ Q''(y_n) gives

(15)
f''(y_n) ≈ (2 / (x_n - y_n)) [ 3 (f(x_n) - f(y_n)) / (x_n - y_n) - 2 f'(y_n) - f'(x_n) ] = Q(x_n, y_n).

Substituting Eq. (15) in Algorithm 2.1, one obtains the modified algorithm:
Algorithm 2.2

For a given x_0, compute the approximate solution x_{n+1} by the iterative scheme

(16)
y_n = x_n - f(x_n)/f'(x_n),

(17)
x_{n+1} = y_n - f(y_n)/f'(y_n) - 2 (f(y_n))^2 f'(y_n) Q(x_n, y_n) / [ 4 (f'(y_n))^4 - 4 f(y_n) (f'(y_n))^2 Q(x_n, y_n) + (f(y_n))^2 (Q(x_n, y_n))^2 ].

We call Algorithm 2.2 the modified Halley's method (MH2); it also has sixth order of convergence. At each iteration, (MH2) needs two evaluations of the function and two evaluations of the first derivative. Therefore, the efficiency index of this modified method is 6^{1/4} ≈ 1.565, which is better than both the 6^{1/5} ≈ 1.431 of (MH1) and the 3^{1/3} ≈ 1.442 of Halley's method.

Remark 2.3

If f''(x) = 0, then Algorithm 2.1 reduces to the two-step scheme

y_n = x_n - f(x_n)/f'(x_n),
x_{n+1} = y_n - f(y_n)/f'(y_n),

which is the two-step Newton's method considered by Traub (1964). This technique is of fourth order of convergence, so (MH1) is a generalization of Newton's two-step method.

3 Convergence analysis

Now, we will discuss the convergence analysis of Algorithms 2.1 and 2.2.

Theorem 3.1

Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊆ R → R on an open interval I. If x_0 is sufficiently close to α, then the method defined by Algorithm 2.1 is of sixth order of convergence.

Proof

Let α be a root of f(x) and let e_n = x_n - α be the error at the nth iteration. Using the Taylor series about x = α we have

(18)
f(x_n) = f'(α) [ e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + ... ],

where c_k = f^{(k)}(α) / (k! f'(α)), k = 2, 3, .... From (18) we have

(19)
f'(x_n) = f'(α) [ 1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + ... ].

Then from (18) and (19) we have

(20)
f(x_n)/f'(x_n) = e_n - c_2 e_n^2 - (2 c_3 - 2 c_2^2) e_n^3 - (3 c_4 - 7 c_2 c_3 + 4 c_2^3) e_n^4 + ...

Using (20) we can write y_n in Algorithm 2.1 as

(21)
y_n = α + c_2 e_n^2 + (2 c_3 - 2 c_2^2) e_n^3 + (3 c_4 - 7 c_2 c_3 + 4 c_2^3) e_n^4 + ...

Expanding f(y_n), f'(y_n) and f''(y_n) about α and using (21) we get

(22)
f(y_n) = f(α) + (y_n - α) f'(α) + ((y_n - α)^2 / 2) f''(α) + ((y_n - α)^3 / 6) f'''(α) + ...
       = f'(α) [ c_2 e_n^2 + (2 c_3 - 2 c_2^2) e_n^3 + (3 c_4 - 7 c_2 c_3 + 5 c_2^3) e_n^4 + ... ],

(23)
f'(y_n) = f'(α) + (y_n - α) f''(α) + ((y_n - α)^2 / 2) f'''(α) + ((y_n - α)^3 / 6) f^{(4)}(α) + ...
        = f'(α) [ 1 + 2 c_2^2 e_n^2 + (4 c_2 c_3 - 4 c_2^3) e_n^3 + (6 c_2 c_4 - 11 c_2^2 c_3 + 8 c_2^4) e_n^4 + ... ],

(24)
f''(y_n) = f''(α) + (y_n - α) f'''(α) + ((y_n - α)^2 / 2) f^{(4)}(α) + ((y_n - α)^3 / 6) f^{(5)}(α) + ...
         = f'(α) [ 2 c_2 + 6 c_2 c_3 e_n^2 + (12 c_3^2 - 12 c_2^2 c_3) e_n^3 + ... ].

Substituting (22)-(24) in x_{n+1} in Algorithm 2.1 we obtain x_{n+1} = α - c_2^3 c_3 e_n^6 + O(e_n^7), implying that e_{n+1} = - c_2^3 c_3 e_n^6 + O(e_n^7). Hence, Algorithm 2.1 has at least sixth order of convergence. □

Theorem 3.2

Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊆ R → R on an open interval I. If x_0 is sufficiently close to α, then the method defined by Algorithm 2.2 is of sixth order of convergence.

Proof

With the same assumptions as in the previous theorem, we have

(25)
Q(x_n, y_n) = (2 / (x_n - y_n)) [ 3 (f(x_n) - f(y_n)) / (x_n - y_n) - 2 f'(y_n) - f'(x_n) ]
            = f'(α) [ 2 c_2 + 2 (3 c_2 c_3 - c_4) e_n^2 - 4 (3 c_2^2 c_3 - 3 c_3^2 - c_2 c_4 + c_5) e_n^3 + ... ].

Substituting (22), (23) and (25) in x_{n+1} in Algorithm 2.2 we obtain x_{n+1} = α + c_2^2 (- c_2 c_3 + c_4) e_n^6 + O(e_n^7), which implies that e_{n+1} = c_2^2 (- c_2 c_3 + c_4) e_n^6 + O(e_n^7). This shows that Algorithm 2.2 has at least sixth order of convergence. □

4 Numerical examples

In this section we present seven different examples to show the efficiency of the new methods (14) and (17). We compare the new methods with the second order Newton's method, the third order Halley's method, the sixth order (NR) method proposed by Noor and Noor (2007), and the sixth order (PCH) method presented by Noor et al. (2007). To do so, we consider the following seven test functions:

f_1(x) = x^2 - e^x - 3x + 2,
f_2(x) = (x - 1)^3 - 1,
f_3(x) = x^3 - 10,
f_4(x) = cos(x) - x,
f_5(x) = sin^2(x) - x^2 + 1,
f_6(x) = e^{x^2 + 7x - 30} - 1,
f_7(x) = x e^{x^2} - sin^2(x) + 3 cos(x) + 5.

We take ε = 10^{-15} in the stopping criterion |x_n - x_{n-1}| < ε. The computations have been carried out using Mathematica 9 with 10,000 significant digits.

Table 1 shows the number of iterations n needed to satisfy the stopping criterion, the approximate zero x_n, the distance x_n - x_{n-1} between two consecutive approximations, the value f(x_n) at the approximate zero after the required iterations, and the computational order of convergence (COC), which can be approximated by the formula

COC ≈ ln( |x_{n+1} - x_n| / |x_n - x_{n-1}| ) / ln( |x_n - x_{n-1}| / |x_{n-1} - x_{n-2}| ),

first introduced by Weerakoon and Fernando (2000).
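The COC formula can be evaluated from any four consecutive iterates; the helper below is a sketch with our own naming and argument order.

```python
import math

# Computational order of convergence (COC) from four consecutive
# iterates, per Weerakoon and Fernando (2000).
# x3 is the latest iterate x_{n+1}; x0 is the oldest, x_{n-2}.
def coc(x3, x2, x1, x0):
    return (math.log(abs((x3 - x2) / (x2 - x1)))
            / math.log(abs((x2 - x1) / (x1 - x0))))

# Example: Newton iterates for x^2 - 2 = 0 from x_0 = 1 should give
# a COC close to 2.
xs = [1.0]
for _ in range(4):
    xs.append(xs[-1] - (xs[-1]**2 - 2) / (2 * xs[-1]))
order = coc(xs[4], xs[3], xs[2], xs[1])
```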

Table 1 Comparisons between different methods.

Method | n | x_n | x_n - x_{n-1} | f(x_n) | COC

f_1(x), x_0 = 2
Newton | 6 | 0.257530285439860760455 | 9.1E-28 | 2.93E-55 | 2
Halley | 5 | 0.257530285439860760455 | -1.51E-29 | 6.27E-88 | 3
NR | 4 | 0.257530285439860760455 | -3.42E-93 | -2.38E-559 | 6
PCH | 4 | 0.257530285439860760455 | -3.81E-94 | -3.72E-565 | 6
MH1 | 4 | 0.257530285439860760455 | -2.44E-92 | -3.72E-554 | 6
MH2 | 3 | 0.257530285439860760455 | -8.73E-22 | -2.85E-130 | 6

f_2(x), x_0 = 2.5
Newton | 7 | 2 | -1.29E-28 | 5.03E-56 | 2
Halley | 5 | 2 | -6.66E-41 | 5.91E-121 | 3
NR | 3 | 2 | -1.8E-17 | 6.86E-101 | 6
PCH | 4 | 2 | -3.75E-87 | 1.39E-518 | 6
MH1 | 3 | 2 | 9.62E-22 | -7.91E-127 | 6
MH2 | 3 | 2 | 9.62E-22 | -7.91E-127 | 6

f_3(x), x_0 = 2
Newton | 5 | 2.15443469003188372176 | -2.26E-18 | 3.29E-35 | 2
Halley | 4 | 2.15443469003188372176 | 3.61E-33 | -9.41E-98 | 3
NR | 3 | 2.15443469003188372176 | -4.5E-42 | 1.67E-249 | 6
PCH | 3 | 2.15443469003188372176 | -2.61E-39 | 1.59E-232 | 6
MH1 | 3 | 2.15443469003188372176 | 3.01E-44 | -7.51E-263 | 6
MH2 | 3 | 2.15443469003188372176 | 3.01E-44 | -7.51E-263 | 6

f_4(x), x_0 = 1.7
Newton | 5 | 0.739085133215160641655 | -2.34E-16 | -2.03E-32 | 2
Halley | 5 | 0.739085133215160641655 | -2.25E-44 | -2.22E-132 | 3
NR | 3 | 0.739085133215160641655 | -5.72E-34 | -7.33E-203 | 6
PCH | 3 | 0.739085133215160641655 | -5.53E-33 | -8.44E-197 | 6
MH1 | 3 | 0.739085133215160641655 | -1.89E-35 | -5.49E-212 | 6
MH2 | 3 | 0.739085133215160641655 | 1.68E-34 | 6.67E-207 | 6

f_5(x), x_0 = 1
Newton | 7 | 1.40449164821534122604 | -7.33E-26 | -1.04E-50 | 2
Halley | 5 | 1.40449164821534122604 | 1.02E-38 | 1.38E-114 | 3
NR | 4 | 1.40449164821534122604 | -6.66E-86 | -5.49E-512 | 6
PCH | 4 | 1.40449164821534122604 | -4.7E-76 | -1.47E-452 | 6
MH1 | 3 | 1.40449164821534122604 | 9.04E-19 | 5.71E-110 | 6
MH2 | 3 | 1.40449164821534122604 | 1.67E-19 | 6.53E-114 | 6

f_6(x), x_0 = 3.5
Newton | 13 | 3 | -4.21E-25 | 1.52E-47 | 2
Halley | 7 | 3 | -5.19E-16 | 2.56E-44 | 3
NR | 5 | 3 | -9.94E-18 | 5.03E-98 | 6
PCH | 6 | 3 | -1.37E-38 | 1.43E-222 | 6
MH1 | 5 | 3 | 3.54E-56 | -2.12E-328 | 6
MH2 | 5 | 3 | 1.22E-27 | -1.71E-157 | 6

f_7(x), x_0 = -2
Newton | 9 | -1.20764782713091892701 | 2.73E-21 | -2.27E-40 | 2
Halley | 5 | -1.20764782713091892701 | 3.12E-17 | -1.56E-49 | 3
NR | 4 | -1.20764782713091892701 | 1.77E-31 | -5.37E-184 | 6
PCH | 4 | -1.20764782713091892701 | 5.54E-17 | -4.99E-96 | 6
MH1 | 4 | -1.20764782713091892701 | -3.74E-39 | 3.76E-229 | 6
MH2 | 4 | -1.20764782713091892701 | -2.12E-35 | 3.1E-207 | 6

The second column in Table 1 shows the number of iterations n needed to reach the stopping criterion. The new methods need fewer iterations than the other methods, or the same number in some cases when compared with the (NR) and (PCH) methods. Therefore, the approximate solutions obtained by the two new methods are comparable with those of well-known existing methods.

Table 2 lists the number of iterations needed to obtain an approximation of the solution with the stopping criterion |x_n - x_{n-1}| < 10^{-200}. With the same convergence criterion for all methods, the number of iterations required by the new methods is less than or equal to the number needed by the other methods of the same order.

Table 2 Comparisons of the number of iterations needed by different methods such that |x_n - x_{n-1}| < 10^{-200}.

Method | f_1, x_0=2 | f_2, x_0=2.5 | f_3, x_0=2 | f_4, x_0=1.7 | f_5, x_0=1 | f_6, x_0=3.5 | f_7, x_0=-2
Newton | 9 | 10 | 9 | 9 | 10 | 17 | 13
Halley | 7 | 7 | 6 | 7 | 7 | 10 | 8
NR | 5 | 5 | 4 | 4 | 5 | 7 | 6
PCH | 5 | 5 | 4 | 5 | 5 | 7 | 6
MH1 | 5 | 5 | 4 | 4 | 5 | 6 | 5
MH2 | 5 | 5 | 4 | 4 | 5 | 7 | 5

5 Conclusion

In this paper we have presented two new efficient root finding methods for solving nonlinear equations. The first was derived using Taylor's expansion together with Halley's method; in the second, an approximation of the second derivative was used to increase the efficiency index of the first method. Overall, the two methods are comparable to other well-known methods of the same order. The convergence of both algorithms has been proved, and the order of convergence established to be six. Seven test examples demonstrate the capability of the methods.

References

  1. Abbasbandy, S. Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Appl. Math. Comput. 2003;145:887-893.
  2. Chun, C. A new iterative method for solving nonlinear equations. Appl. Math. Comput. 2006;187:415-422.
  3. Hafiz, M.A., Al-Goria, S.M. New ninth and seventh order methods for solving nonlinear equations. Eur. Sci. J. 2012;8(27):83-95.
  4. Halley, E. A new, exact and easy method for finding the roots of equations generally, and that without any previous reduction. Philos. Trans. R. Soc. London. 1694;18:136-148.
  5. He, J.H. A new iterative method for solving algebraic equations. Appl. Math. Comput. 2003;135:81-84.
  6. Kogan, T., Sapir, L., Sapir, A., Sapir, T. To the question of efficiency of iterative methods. Appl. Math. Lett. 2017;66:40-46.
  7. Kumar, D., et al. A family of higher order iterations free from second derivative for nonlinear equations in R. J. Comput. Appl. Math. 2017.
  8. Madhu, K., Jayaraman, J. Higher order methods for nonlinear equations and their basins of attraction. Mathematics. 2016;4(2):22.
  9. Noor, K.I., Noor, M.A. Predictor-corrector Halley method for nonlinear equations. Appl. Math. Comput. 2007;188:1587-1591.
  10. Noor, K.I., Noor, M.A., et al. A new modified Halley method without second derivatives for nonlinear equation. Appl. Math. Comput. 2007;189:1268-1273.
  11. Noor, M.A., et al. Modified Householder iterative method for nonlinear equations. Appl. Math. Comput. 2007;190:1534-1539.
  12. Traub, J.F. Iterative Methods for the Solution of Equations. Englewood Cliffs, NJ: Prentice-Hall; 1964.
  13. Weerakoon, S., Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000;13:87-93.