32(1); 312-323. doi: 10.1016/j.jksus.2018.05.006

Three iterative methods for solving second order nonlinear ODEs arising in physics

Department of Mathematics, College of Education for Pure Sciences (Ibn AL-Haitham), University of Baghdad, Baghdad, Iraq

⁎Corresponding author. majeed.a.w@ihcoedu.uobaghdad.edu.iq (M.A. Al-Jawary)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Peer review under responsibility of King Saud University.

Abstract

In this work, three iterative methods are implemented to solve several second order nonlinear ODEs arising in physics: the Temimi-Ansari method (TAM), the Daftardar-Jafari method (DJM), and the Banach contraction method (BCM). None of these methods requires any restrictive assumption to handle the nonlinear term. The obtained results are compared numerically with those of other numerical methods, such as the fourth-order Runge-Kutta (RK4) and Euler methods. In addition, the convergence of the proposed methods is established on the basis of the Banach fixed point theorem. The maximal error remainder values show that the present methods are effective and reliable. All calculations in this study were performed in Mathematica® 10.

Keywords

Iterative methods
Approximate solution
Numerical solution
Runge-Kutta 4 method
Euler method
1 Introduction

Differential equations have many applications in science and engineering, especially in problems that take the form of nonlinear equations. The application of nonlinear ordinary differential equations by mathematical scientists and researchers has therefore become increasingly important and interesting. These equations are known to describe many different types of phenomena, such as the modeling of dynamics, heat conduction, diffusion, acoustic waves, transport, and many others.

Iterative methods are often used to obtain approximate solutions of various nonlinear problems. A new iterative method was presented in 2011 by Temimi and Ansari (TAM) (Temimi and Ansari, 2011) for solving nonlinear problems. The TAM was inspired by the homotopy analysis method (HAM) (Liao and Chwang, 1998), and has been used to solve several ODEs (Temimi and Ansari, 2015); PDEs and KdV equations (Ehsani et al., 2013); differential algebraic equations (DAEs) (AL-Jawary and Hatif, 2017); Duffing equations (AL-Jawary and Al-Razaq, 2016); some chemical problems (AL-Jawary and Raham, 2017); the thin film flow problem (AL-Jawary, 2017); and Fokker-Planck equations (AL-Jawary et al., 2017).

Another method was proposed in 2006 by Daftardar-Gejji and Jafari (DJM) (Daftardar-Gejji and Jafari, 2006) to solve nonlinear equations. This method has also been used to solve various equations, such as fractional differential equations (Daftardar-Gejji and Bhalekar, 2008); partial differential equations (Bhalekar and Daftardar-Gejji, 2008); Volterra integro-differential equations, with some applications to the Lane-Emden equations (AL-Jawary and AL-Qaissy, 2015); and evolution equations (Bhalekar and Daftardar-Gejji, 2010). The method produces a sequence of successive approximations that converges to the exact solution, if such a solution exists.

The third iterative method is based on the Banach contraction principle (BCP) (Daftardar-Gejji and Bhalekar, 2009), which is regarded as the main source of metric fixed point theory. The Banach contraction principle, also known as the Banach fixed point theorem (BFPT), has been used to solve various kinds of differential and integral equations (Joshi and Bose, 1985).

In this paper, the TAM, DJM, and BCM will be implemented to obtain approximate solutions of nonlinear second order ODEs arising in physics. These solutions will be compared numerically with results obtained by the Runge-Kutta and Euler methods to show the validity and efficiency of the proposed iterative methods. The convergence of the presented methods is also discussed.

This paper is organized as follows: Section 2 reviews the basic ideas of the proposed iterative methods. Section 3 presents the convergence of the proposed methods. Section 4 illustrates the approximate solutions and the numerical simulations for several test problems. Finally, the conclusion is given in Section 5.

2 The basic concepts of the three iterative methods

An iterative method is a mathematical procedure that generates a sequence of improved approximate solutions for a class of problems. Starting from a given initial approximation, it produces approximate solutions that converge to the exact solution whenever the corresponding sequence is convergent.

Let us introduce the following nonlinear differential equation:

(1)
L(y(x)) + N(y(x)) + g(x) = 0, with the boundary conditions
(2)
B(y, dy/dx) = 0,  x ∈ D,
where x represents the independent variable, y(x) is the unknown function, g(x) is a given known function, L(·) = d^2(·)/dx^2 is the linear operator, N(·) is the nonlinear operator, and B(·) is a boundary operator. Now, let us begin by introducing the basic ideas of the three iterative methods.

2.1 The basic idea of the TAM

We first assume that y_0(x) is an initial guess for the solution y(x); the procedure begins by solving the following initial value problem (Temimi and Ansari, 2011):

(3)
L(y_0(x)) + g(x) = 0, with B(y_0, dy_0/dx) = 0.
The next approximate solution is obtained by solving the problem
(4)
L(y_1(x)) + g(x) + N(y_0(x)) = 0, with B(y_1, dy_1/dx) = 0,
and thus we have a simple iterative procedure, namely the solution of the set of problems
(5)
L(y_{n+1}(x)) + g(x) + N(y_n(x)) = 0, with B(y_{n+1}, dy_{n+1}/dx) = 0.

Then, the solution for the problem (1) with (2) is given by

(6)
y = lim_{n→∞} y_n.
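For a concrete picture of the iteration (5), the scheme can be prototyped in a computer algebra system. The following Python/sympy sketch (an illustration, not the paper's Mathematica code) applies TAM steps to the model problem y'' = 6y^2 + x, y(0) = 0, y'(0) = 1, which is Problem 1 of Section 4; the double integration is reduced to a single integral as in Eq. (8):

```python
import sympy as sp

x, tau = sp.symbols('x tau')

def tam_step(y_prev):
    # One TAM iteration (5) for y'' = 6 y^2 + x, y(0) = 0, y'(0) = 1:
    # solve L(y_{n+1}) + g + N(y_n) = 0, i.e. y'' = 6 y_n^2 + x, by
    # integrating the right-hand side twice; the double integral is
    # reduced to int_0^x (x - tau) f(tau) dtau, and the leading term x
    # carries the initial slope y'(0) = 1.
    rhs = 6*y_prev.subs(x, tau)**2 + tau
    return sp.expand(x + sp.integrate((x - tau)*rhs, (tau, 0, x)))

y0 = tam_step(sp.Integer(0))   # initial problem L(y0) + g = 0
y1 = tam_step(y0)              # first TAM iterate
```

Each call reproduces one problem of the sequence (5); y0 and y1 agree with the closed-form iterates derived for Problem 1 below.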

2.2 The basic idea of the DJM

Let us apply the inverse operator L^{-1}(·) = ∫_0^x ∫_0^x (·) dτ dτ to the nonlinear problem given by (1) and (2); we have

(7)
y(x) = f(x) + ∫_0^x ∫_0^x N(y(τ)) dτ dτ,
and by reducing the double integration to a single one (Wazwaz, 2015), we get the following form
(8)
y(x) = f(x) + ∫_0^x (x - τ) N(y(τ)) dτ,
where f is a known analytic function representing the sum of the given initial conditions and the result of integrating the function g (when such a function is present).

The solution y of Eq. (8) can be given by the following series (Daftardar-Gejji and Jafari, 2006):

(9)
y = Σ_{i=0}^∞ y_i.
Now, the following terms can be defined:
(10)
G_0 = N(y_0),
G_m = N(Σ_{i=0}^m y_i) - N(Σ_{i=0}^{m-1} y_i),  m ≥ 1,
so that N(y) can be decomposed as N(Σ_{i=0}^∞ y_i) = G_0 + G_1 + G_2 + ⋯ = N(y_0) + [N(y_0 + y_1) - N(y_0)] + [N(y_0 + y_1 + y_2) - N(y_0 + y_1)] + ⋯. Moreover, the recurrence relation is defined by
(11)
y_0 = f,
(12)
y_1 = L(y_0) + G_0,
(13)
y_{m+1} = L(y_m) + G_m,  m ≥ 1.
Since L is a linear operator, Σ_{i=0}^m L(y_i) = L(Σ_{i=0}^m y_i), and we may write Σ_{i=1}^{m+1} y_i = Σ_{i=0}^m L(y_i) + N(Σ_{i=0}^m y_i) = L(Σ_{i=0}^m y_i) + N(Σ_{i=0}^m y_i), m ≥ 1, so that Σ_{i=0}^∞ y_i = f + L(Σ_{i=0}^∞ y_i) + N(Σ_{i=0}^∞ y_i), and the n-term approximate solution is given by y ≈ Σ_{i=0}^n y_i, n ∈ ℕ.
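The component recursion (9)-(10) can likewise be sketched symbolically. The snippet below is a hedged illustration in Python/sympy (not from the paper), using the sample operator N(y) = ∫_0^x (x - τ) 6y^2(τ) dτ that arises for Problem 1 of Section 4:

```python
import sympy as sp

x, tau = sp.symbols('x tau')

def N(y):
    # sample nonlinear operator of the form in Eq. (8):
    # N(y) = int_0^x (x - tau) * 6 y(tau)^2 dtau
    return sp.expand(sp.integrate((x - tau)*6*y.subs(x, tau)**2, (tau, 0, x)))

def djm_components(f, n):
    # y_0 = f;  y_{m+1} = G_m = N(y_0+...+y_m) - N(y_0+...+y_{m-1}),
    # following Eq. (10); the approximate solution is the partial sum.
    comps = [sp.expand(f)]
    n_prev = sp.Integer(0)       # N applied to the previous partial sum
    for _ in range(n):
        n_curr = N(sum(comps))
        comps.append(sp.expand(n_curr - n_prev))
        n_prev = n_curr
    return comps

y0, y1 = djm_components(x + x**3/6, 1)   # f taken from Problem 1 below
```

Summing the returned components gives the n-term approximation of Eq. (9).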

2.3 The basic idea of the Banach contraction method (BCM)

Consider Eq. (8) as a general functional equation. In order to implement the BCM, we define successive approximations (Daftardar-Gejji and Bhalekar, 2009):

(14)
y_0 = f, y_1 = y_0 + N(y_0), y_2 = y_0 + N(y_1),
and so on; the successive approximations y_n(x) take the generalized form
(15)
y_n = y_0 + N(y_{n-1}),  n ∈ ℕ.
Therefore, the solution determined by the relations (14) and (15) is obtained as
(16)
y = lim_{n→∞} y_n.
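The contraction iteration (15) is the simplest of the three to code. A minimal Python/sympy sketch (an assumed illustration, with the sample operator N(y) = ∫_0^x (x - τ) 6y^2(τ) dτ of Problem 1 in Section 4) reads:

```python
import sympy as sp

x, tau = sp.symbols('x tau')

def N(y):
    # sample nonlinear operator: N(y) = int_0^x (x - tau) * 6 y(tau)^2 dtau
    return sp.expand(sp.integrate((x - tau)*6*y.subs(x, tau)**2, (tau, 0, x)))

def bcm_iterates(y0, n):
    # Banach contraction iteration (15): y_k = y_0 + N(y_{k-1})
    ys = [sp.expand(y0)]
    for _ in range(n):
        ys.append(sp.expand(y0 + N(ys[-1])))
    return ys

ys = bcm_iterates(x + x**3/6, 1)
```

Unlike the DJM, each iterate here is already a full approximation, not a component to be summed.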

3 The convergence of the proposed iterative methods

In this section, we present the fundamental theorems and concepts for the convergence (Odibat, 2010) of the presented methods.

The iterates produced by the DJM are used directly to prove the convergence. However, for the convergence proof of the TAM or the BCM, the following procedure is used for handling Eq. (1) with the given conditions (2). We define the terms

(17)
v_0 = y_0(x), v_1 = F[v_0], v_2 = F[v_0 + v_1], …, v_{n+1} = F[v_0 + v_1 + ⋯ + v_n],
where F represents the operator
(18)
F[v_k] = S_k - Σ_{i=0}^{k-1} v_i(x),  k ≥ 1.
Here the term S_k is the solution of the following problem; for the TAM:
(19)
L(v_k(x)) + g(x) + N(Σ_{i=0}^{k-1} v_i(x)) = 0,  k ≥ 1,
and for the BCM:
(20)
v_k = v_0 + N(Σ_{i=0}^{k-1} v_i(x)),  k ≥ 1,
subject to the same conditions as the intended iterative technique. Therefore, we get y(x) = lim_{n→∞} y_n(x) = Σ_{n=0}^∞ v_n. Hence, by using Eqs. (17) and (18), the solution is obtained in the series form
(21)
y(x) = Σ_{i=0}^∞ v_i(x).
According to the recursive algorithms of the proposed methods, sufficient conditions for their convergence can be given in the following theorems.
Theorem 4.1

Let F, defined in (18), be an operator from a Hilbert space H to H. The series solution y_n(x) = Σ_{i=0}^n v_i(x) converges if there exists 0 < γ < 1 such that ‖F[v_0 + v_1 + ⋯ + v_{i+1}]‖ ≤ γ‖F[v_0 + v_1 + ⋯ + v_i]‖ (that is, ‖v_{i+1}‖ ≤ γ‖v_i‖) for each i = 0, 1, 2, ….

This theorem is a special case of Banach's fixed point theorem and provides a sufficient condition for convergence.

Proof

See (Odibat, 2010). □

Theorem 4.2

If the series solution y(x) = Σ_{i=0}^∞ v_i(x) is convergent, then this series represents the exact solution of the current nonlinear problem.

Proof

See (Odibat, 2010). □

Theorem 4.3

Suppose that the series solution Σ_{i=0}^∞ v_i(x) given by (21) is convergent to the solution y(x). If the truncated series Σ_{i=0}^n v_i(x) is used as an approximation to the solution of the current problem, then the maximum error E_n(x) is estimated by

(22)
E_n(x) ≤ (1/(1 - γ)) γ^{n+1} ‖v_0‖.

Proof

See (Odibat, 2010). □

Theorems 4.1 and 4.2 state that the solutions obtained by one of the presented methods, i.e. the relation (5) (for the TAM), the relation (13) (for the DJM), the relation (15) (for the BCM), or (17), converge to the exact solution if there exists 0 < γ < 1 such that ‖F[v_0 + v_1 + ⋯ + v_{i+1}]‖ ≤ γ‖F[v_0 + v_1 + ⋯ + v_i]‖ (that is, ‖v_{i+1}‖ ≤ γ‖v_i‖) for each i = 0, 1, 2, …. In other words, if for each i we define the parameters

(23)
β_i = ‖v_{i+1}‖/‖v_i‖ when ‖v_i‖ ≠ 0, and β_i = 0 when ‖v_i‖ = 0,
then the series solution Σ_{i=0}^∞ v_i(x) of the nonlinear ODE (1) converges to the exact solution y(x) when 0 ≤ β_i < 1 for all i = 0, 1, 2, …. Also, as in Theorem 4.3, the maximum truncation error is estimated by ‖y(x) - Σ_{i=0}^n v_i‖ ≤ (1/(1 - β)) β^{n+1} ‖v_0‖, where β = max{β_i, i = 0, 1, …, n}.
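As a worked instance of (23), the first ratio β_0 for Problem 1 of Section 4 can be reproduced numerically. The sketch below assumes the sup-norm on 0 < x ≤ 1; since both components are monotonically increasing there, the norms are attained at x = 1:

```python
import sympy as sp

x = sp.symbols('x')
# first two component terms for Problem 1 (derived in Section 4)
v0 = x + x**3/6
v1 = x**4/2 + x**6/15 + x**8/336
# sup-norms over (0, 1] reduce to evaluation at x = 1
beta0 = float((v1/v0).subs(x, 1))
```

The result, about 0.488, matches the value β_0 = 0.488265 quoted for this problem in Section 4.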

4 Test problems

Problem 1:

The Painlevé I equation can be given in the following form (Hesameddini and Peyrovi, 2009):

(24)
y''(x) = 6y^2(x) + x,
with the initial conditions y(0) = 0 and y'(0) = 1. This problem will be solved by using the three proposed iterative methods.

Solving the problem 1 by the TAM:

In order to solve Eq. (24) by the TAM, we have the following form:

(25)
L(y) = y''(x), N(y) = -6y^2(x) and g(x) = -x. The initial problem will be:
(26)
L(y_0(x)) = x, with y_0(0) = 0 and y_0'(0) = 1.
The next problems can be obtained from the following generalized relation:
(27)
L(y_{n+1}(x)) + g(x) + N(y_n(x)) = 0, with y_{n+1}(0) = a and y_{n+1}'(0) = b.
First, to get the zeroth approximation y_0(x), the following initial problem must be solved:
(28)
y_0''(x) = x.
By integrating both sides of Eq. (28) twice from 0 to x and substituting the initial conditions y_0(0) = 0 and y_0'(0) = 1, we get y_0(x) = x + x^3/6. In a similar way, the remaining iterations can be carried out; the first iteration is obtained by solving
(29)
y_1''(x) = 6y_0^2(x) + x, with y_1(0) = 0 and y_1'(0) = 1.
The approximate solution of Eq. (29) will then be: y_1(x) = x + x^3/6 + x^4/2 + x^6/15 + x^8/336. The second iteration y_2(x) is obtained by solving
(30)
y_2''(x) = 6y_1^2(x) + x, with y_2(0) = 0 and y_2'(0) = 1.
Then, by solving Eq. (30), we obtain: y_2(x) = x + x^3/6 + x^4/2 + x^6/15 + x^7/7 + x^8/336 + x^9/40 + x^10/60 + 71x^11/46200 + x^12/330 + x^13/26208 + 187x^14/764400 + x^16/100800 + x^18/5757696.

Thus, we continue in this manner to obtain the approximations up to n = 5 for y_n(x); for brevity, the terms are not listed.

Solving the problem 1 by the DJM:

Consider Eq. (24) with the initial conditions y(0) = 0 and y'(0) = 1.

Integrating both sides of Eq. (24) twice from 0 to x and using the given initial conditions, we have

(31)
y(x) = x + x^3/6 + ∫_0^x ∫_0^x 6y^2(τ) dτ dτ,
and reducing the double integration in Eq. (31) to a single one (Wazwaz, 2015), we obtain
(32)
y(x) = x + x^3/6 + ∫_0^x (x - τ) 6y^2(τ) dτ.
Then, the following relations can be defined: y_0 = x + x^3/6, N(y_n) = ∫_0^x (x - τ) 6y_n^2(τ) dτ, n ∈ ℕ ∪ {0}. By applying the DJM, we get the successive approximations y_0 = x + x^3/6, y_1 = x + x^3/6 + x^4/2 + x^6/15 + x^8/336, y_2 = x + x^3/6 + x^4/2 + x^6/15 + x^7/7 + x^8/336 + x^9/40 + x^10/60 + 71x^11/46200 + x^12/330 + x^13/26208 + 187x^14/764400 + x^16/100800 + x^18/5757696. Therefore, we continue to get approximations up to n = 5 for y_n(x), but they are not listed.

Solving the problem 1 by the BCM:

Consider Eq. (24); by following the same procedure as in the DJM, we get Eq. (32). So, let y_0 = x + x^3/6 and N(y_{n-1}) = ∫_0^x (x - τ) 6y_{n-1}^2(τ) dτ, n ∈ ℕ.

Applying the BCM, we obtain: y_0 = x + x^3/6, y_1 = x + x^3/6 + x^4/2 + x^6/15 + x^8/336, y_2 = x + x^3/6 + x^4/2 + x^6/15 + x^7/7 + x^8/336 + x^9/40 + x^10/60 + 71x^11/46200 + x^12/330 + x^13/26208 + 187x^14/764400 + x^16/100800 + x^18/5757696. We continue to get the approximations up to n = 5; for brevity, they are not listed.

It can be seen clearly that the obtained approximate solutions from the three proposed techniques are the same.

In order to assess the convergence of the obtained approximate solution for problem 1, the relations given in Eqs. (17)-(21) will be used. The iterative scheme for Eq. (24) starts from v_0(x) = y_0(x) = x + x^3/6.

When applying the TAM, the operator F[v_k] defined in Eq. (18) uses the term S_k, which is the solution of the problem

(33)
v_k''(x) = 6(Σ_{i=0}^{k-1} v_i(x))^2 + x, with v_k(0) = 0 and v_k'(0) = 1, k ≥ 1.
When applying the BCM, S_k represents the solution of the problem
(34)
v_k = v_0 + N(Σ_{i=0}^{k-1} v_i(x)),  k ≥ 1.

On the other hand, one can use the iterative approximations directly when applying the DJM. Therefore, we have the following terms: v_1 = x^4/2 + x^6/15 + x^8/336, v_2 = x^7/7 + x^9/40 + x^10/60 + 71x^11/46200 + x^12/330 + x^13/26208 + 187x^14/764400 + x^16/100800 + x^18/5757696, v_3 = 2x^10/105 + 41x^12/9240 + 37x^13/5460 + 527x^14/1401400 + 1543x^15/970200 + 105563x^16/112112000 + 91061x^17/571771200 + ⋯. As presented in the convergence proof of the proposed methods, the terms of the series Σ_{i=0}^∞ v_i(x) in (21) satisfy the convergence conditions; evaluating the β_i values for this case, we get

(35)
β_0 = ‖v_1‖/‖v_0‖ = 0.488265 < 1, β_1 = ‖v_2‖/‖v_1‖ = 0.332461 < 1, β_2 = ‖v_3‖/‖v_2‖ = 0.178092 < 1, β_3 = ‖v_4‖/‖v_3‖ = 0.102841 < 1, β_4 = ‖v_5‖/‖v_4‖ = 0.065685 < 1,
where the β_i values for i ≥ 0 and 0 < x ≤ 1 are less than 1, so the proposed iterative methods satisfy the convergence condition.

In order to examine the accuracy of the approximate solutions obtained by the proposed methods for Eq. (24), and since the exact solution is unknown, the maximal error remainder MER_n will be calculated. The error remainder function for problem 1 is defined as

(36)
ER_n(x) = y_n''(x) - 6y_n^2(x) - x,
and the MER_n is:
(37)
MER_n = max_{0.01 ≤ x ≤ 0.1} |ER_n(x)|.
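For example, MER_1 can be evaluated with a short script; the grid over [0.01, 0.1] and the use of Python/sympy below are illustrative assumptions, not the paper's code:

```python
import sympy as sp

x = sp.symbols('x')
# first TAM iterate for problem 1
y1 = x + x**3/6 + x**4/2 + x**6/15 + x**8/336
ER1 = sp.diff(y1, x, 2) - 6*y1**2 - x      # error remainder, Eq. (36)
f = sp.lambdify(x, abs(ER1))
# maximal error remainder over a grid on [0.01, 0.1], Eq. (37)
mer1 = max(f(0.01 + 0.09*k/99) for k in range(100))
```

Higher iterates y_n can be substituted in the same way to trace how the maximal error remainder decreases with n.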

Fig. 1 shows the logarithmic plots of MER_n for the approximate solutions obtained by the proposed iterative methods, which indicates the efficiency of these methods. It can be seen that the errors decrease as the number of iterations increases.

Fig. 1. Logarithmic plots for the MER_n versus n from 1 to 5, for problem 1.

We have also made a numerical comparison between the solutions obtained by the proposed methods and those of the Runge-Kutta (RK4) and Euler methods. The comparison for problem 1 is given in Fig. 2; it can be seen that good agreement is achieved.

Fig. 2. The comparison of the solutions for problem 1.

Problem 2:

The Painlevé II equation is given in the following form (Hesameddini and Latifizadeh, 2012):

(38)
y''(x) = 2y^3(x) + xy(x) + μ,
with the initial conditions y(0) = 1 and y'(0) = 0. Eq. (38) will be solved by the three proposed iterative methods. The parameter μ is taken equal to 1 in this work.

Solving the problem 2 by the TAM:

In order to solve Eq. (38) by the TAM, we have the following form:

(39)
L(y) = y'', N(y) = -2y^3 - xy and g(x) = -1. The initial problem is
(40)
L(y_0(x)) = 1, with y_0(0) = 1 and y_0'(0) = 0.
The next problems can be found from the generalized iterative formula L(y_{n+1}(x)) + g(x) + N(y_n(x)) = 0, with y_{n+1}(0) = 1 and y_{n+1}'(0) = 0. By solving the initial problem (40), one gets y_0(x) = 1 + x^2/2. The first iteration y_1(x) can be found by solving y_1''(x) = 2y_0^3(x) + xy_0(x) + 1 with y_1(0) = 1 and y_1'(0) = 0. The solution will be y_1(x) = 1 + 3x^2/2 + x^3/6 + x^4/4 + x^5/40 + x^6/20 + x^8/224. Applying the same process for y_2, we solve y_2''(x) = 2y_1^3(x) + xy_1(x) + 1, with y_2(0) = 1 and y_2'(0) = 0. By solving this problem, we have y_2(x) = 1 + 3x^2/2 + x^3/6 + 3x^4/4 + x^5/8 + 91x^6/180 + 17x^7/210 + 1409x^8/6720 + 13x^9/288 + 929x^10/16800 + 38593x^11/3326400 + 26683x^12/2217600 + 1483x^13/655200 + 6239x^14/3057600 + 809x^15/2352000 + 4583x^16/16128000 + 2357x^17/60928000 + 31273x^18/959616000 + 37x^19/10944000 + 3499x^20/1191680000 + 109x^21/526848000 + 81x^22/386355200 + 3x^23/507781120 + x^24/92323840 + x^26/3652812800.

Continuing in this manner, we obtain the approximations up to n = 5 for y_n(x); for brevity, they are not listed.
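The first iteration can be verified symbolically. A minimal Python/sympy sketch (an illustration, not the paper's code) integrates the TAM step y_1'' = 2y_0^3 + xy_0 + 1 twice via the single-integral reduction:

```python
import sympy as sp

x, tau = sp.symbols('x tau')
# TAM step for Problem 2: y1'' = 2 y0^3 + x y0 + 1, y1(0) = 1, y1'(0) = 0
y0 = 1 + x**2/2
rhs = 2*y0.subs(x, tau)**3 + tau*y0.subs(x, tau) + 1
y1 = sp.expand(1 + sp.integrate((x - tau)*rhs, (tau, 0, x)))
```

The polynomial produced agrees term by term with the series for y_1(x) given above.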

Solving the problem 2 by the DJM:

Consider Eq. (38) with the initial conditions y(0) = 1 and y'(0) = 0.

Integrating both sides of Eq. (38) twice from 0 to x and using the given initial conditions, we obtain

(41)
y(x) = 1 + x^2/2 + ∫_0^x ∫_0^x (2y^3(τ) + τy(τ)) dτ dτ,
and reducing the double integration in Eq. (41) to a single one (Wazwaz, 2015), we achieve
(42)
y(x) = 1 + x^2/2 + ∫_0^x (x - τ)(2y^3(τ) + τy(τ)) dτ.
Therefore, we have the following recurrence relation: y_0 = 1 + x^2/2, N(y_n) = ∫_0^x (x - τ)(2y_n^3(τ) + τy_n(τ)) dτ, n ∈ ℕ ∪ {0}.

By applying the DJM, we get y_0 = 1 + x^2/2, y_1 = x^2 + x^3/6 + x^4/4 + x^5/40 + x^6/20 + x^8/224, y_2 = x^4/2 + x^5/10 + 41x^6/90 + 17x^7/210 + 197x^8/960 + 13x^9/288 + 929x^10/16800 + 38593x^11/3326400 + 26683x^12/2217600 + 1483x^13/655200 + 6239x^14/3057600 + 809x^15/2352000 + 4583x^16/16128000 + 2357x^17/60928000 + 31273x^18/959616000 + 37x^19/10944000 + 3499x^20/1191680000 + 109x^21/526848000 + 81x^22/386355200 + 3x^23/507781120 + x^24/92323840 + x^26/3652812800. Therefore, we continue to get approximations up to n = 5 for y_n(x), but for brevity the terms are not listed.

Solving the problem 2 by the BCM:

Consider Eq. (38); by following the same procedure as in the DJM, we get Eq. (42). So, let y_0 = 1 + x^2/2 and N(y_{n-1}) = ∫_0^x (x - τ)(2y_{n-1}^3(τ) + τy_{n-1}(τ)) dτ, n ∈ ℕ.

Applying the BCM, we obtain: y_0 = 1 + x^2/2, y_1 = 1 + 3x^2/2 + x^3/6 + x^4/4 + x^5/40 + x^6/20 + x^8/224, y_2 = 1 + 3x^2/2 + x^3/6 + 3x^4/4 + x^5/8 + 91x^6/180 + 17x^7/210 + 1409x^8/6720 + 13x^9/288 + 929x^10/16800 + 38593x^11/3326400 + 26683x^12/2217600 + 1483x^13/655200 + 6239x^14/3057600 + 809x^15/2352000 + 4583x^16/16128000 + 2357x^17/60928000 + 31273x^18/959616000 + 37x^19/10944000 + 3499x^20/1191680000 + 109x^21/526848000 + 81x^22/386355200 + 3x^23/507781120 + x^24/92323840 + x^26/3652812800. We continue to get the approximations up to n = 5; for brevity, they are not listed.

The solutions obtained from the three proposed methods are the same. Hence, as presented in the convergence proof in the previous section, and by following the same procedure as for problem 1, the terms of the series Σ_{i=0}^∞ v_i(x) in Eq. (21) satisfy the convergence conditions; evaluating the β_i values for each iterative method, we get

(43)
β_0 = ‖v_1‖/‖v_0‖ = 0.997421 < 1, β_1 = ‖v_2‖/‖v_1‖ = 0.983067 < 1, β_2 = ‖v_3‖/‖v_2‖ = 0.736222 < 1, β_3 = ‖v_4‖/‖v_3‖ = 0.500802 < 1, β_4 = ‖v_5‖/‖v_4‖ = 0.32419 < 1,

where the β_i values for i ≥ 0 and 0 < x ≤ 1 are less than 1, so the proposed iterative methods are convergent.

For further investigation of the accuracy of the approximate solution for this problem, the error remainder function is evaluated:

(44)
ER_n(x) = y_n''(x) - 2y_n^3(x) - xy_n(x) - 1,
and the MER_n is:
(45)
MER_n = max_{0.01 ≤ x ≤ 0.1} |ER_n(x)|.
Fig. 3 shows the logarithmic plots of MER_n for the approximate solutions obtained by the proposed iterative methods, which indicates the efficiency of these methods. Again, the errors decrease as the number of iterations increases.
Fig. 3. Logarithmic plots for the MER_n versus n from 1 to 5, for problem 2.

The numerical comparison between the solutions obtained by the proposed methods and those of the Runge-Kutta (RK4) and Euler methods for problem 2 is given in Fig. 4; good agreement is clearly obtained.

Fig. 4. The comparison of the solutions for problem 2.

Problem 3:

The pendulum equation is presented in the form (Duan, 2011):

(46)
y''(x) + sin y = 0,
with the given initial conditions y(0) = 0 and y'(0) = 1. It can be solved without linearization by using the approximation sin y ≈ y - y^3/6 + y^5/120, as used in (He, 1999). Hence, the pendulum equation (46) can be written as the following second order nonlinear ODE:
(47)
y''(x) + y - y^3/6 + y^5/120 = 0.
The exact solution of the pendulum equation is expressed by the following Jacobi elliptic function: y = 2 arcsin((1/2) sn(x, 1/4)).
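The exact solution can be evaluated numerically; the sketch below assumes scipy's ellipj routine, whose second argument is the parameter m = k^2 = 1/4, and checks by finite differences that the closed form satisfies the pendulum equation (46):

```python
import numpy as np
from scipy.special import ellipj

def y_exact(u):
    # y(u) = 2 arcsin( (1/2) sn(u | m) ) with parameter m = 1/4
    sn, cn, dn, ph = ellipj(u, 0.25)
    return 2.0*np.arcsin(0.5*sn)

# finite-difference residual of y'' + sin y = 0 at a sample point
h, x0 = 1e-4, 0.7
ypp = (y_exact(x0 + h) - 2.0*y_exact(x0) + y_exact(x0 - h))/h**2
residual = ypp + np.sin(y_exact(x0))
```

The initial conditions y(0) = 0 and y'(0) = 1 can be checked the same way with central differences.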

Solving the problem 3 by the TAM:

In order to solve the pendulum equation (47) with the given conditions by the TAM, we have the following form:

(48)
L(y) = y'', N(y) = y - y^3/6 + y^5/120. The initial problem is
(49)
L(y_0(x)) = 0, with y_0(0) = 0 and y_0'(0) = 1.
The next problems can be found from the generalized iterative formula L(y_{n+1}(x)) + N(y_n(x)) = 0, with y_{n+1}(0) = 0 and y_{n+1}'(0) = 1. By solving the initial problem (49), one gets y_0(x) = x. The first iteration y_1(x) can be found by solving y_1''(x) = -(y_0 - y_0^3/6 + y_0^5/120) with y_1(0) = 0 and y_1'(0) = 1. The solution will be y_1(x) = x - x^3/6 + x^5/120 - x^7/5040. Applying the same process for y_2, we solve y_2''(x) = -(y_1 - y_1^3/6 + y_1^5/120), with y_2(0) = 0 and y_2'(0) = 1. By solving this problem, we get y_2(x) = x - x^3/6 + x^5/60 - x^7/420 + 127x^9/362880 - 893x^11/19958400 + 367x^13/70761600 - 607x^15/1143072000 + 56881x^17/1243662336000 - 2521x^19/781861248000 + 17x^21/92177326080 - 22129x^23/2591207055360000 + 17651x^25/55306395648000000 - 61787x^27/6470848290816000000 + 2021x^29/8981758653235200000 - 73x^31/18002231783424000000 + 13x^33/245294925978009600000 - x^35/2211370923589632000000 + x^37/519802247686127616000000. Continuing in this manner, we obtain approximations up to n = 5 for y_n(x), but for brevity they are not listed.

Solving the problem 3 by the DJM:

Consider the pendulum equation (47) with the given conditions y(0) = 0 and y'(0) = 1.

Integrating both sides of Eq. (47) twice from 0 to x, we get y(x) = x - ∫_0^x ∫_0^x (y - y^3/6 + y^5/120) dτ dτ, and reducing the double integration to a single one (Wazwaz, 2015), we obtain

(50)
y(x) = x - ∫_0^x (x - τ)(y(τ) - y^3(τ)/6 + y^5(τ)/120) dτ.
Therefore, we have the following recurrence relation: y_0 = x, N(y_n) = -∫_0^x (x - τ)(y_n - y_n^3/6 + y_n^5/120) dτ, n ∈ ℕ ∪ {0}. By applying the DJM, we get y_0 = x, y_1 = -x^3/6 + x^5/120 - x^7/5040, y_2 = x^5/120 - 11x^7/5040 + 127x^9/362880 - 893x^11/19958400 + 367x^13/70761600 - 607x^15/1143072000 + 56881x^17/1243662336000 - 2521x^19/781861248000 + 17x^21/92177326080 - 22129x^23/2591207055360000 + 17651x^25/55306395648000000 - 61787x^27/6470848290816000000 + 2021x^29/8981758653235200000 - 73x^31/18002231783424000000 + 13x^33/245294925978009600000 - x^35/2211370923589632000000 + x^37/519802247686127616000000. Therefore, we continue to get the other iterations up to n = 5 for y_n(x), but for brevity the terms are not listed.

Solving the problem 3 by the BCM:

Consider Eq. (47); by applying the same procedure as in the DJM, we get Eq. (50). So, let y_0 = x and N(y_{n-1}) = -∫_0^x (x - τ)(y_{n-1} - y_{n-1}^3/6 + y_{n-1}^5/120) dτ, n ∈ ℕ. By applying the BCM, we obtain: y_0 = x, y_1 = x - x^3/6 + x^5/120 - x^7/5040, y_2 = x - x^3/6 + x^5/60 - x^7/420 + 127x^9/362880 - 893x^11/19958400 + 367x^13/70761600 - 607x^15/1143072000 + 56881x^17/1243662336000 - 2521x^19/781861248000 + 17x^21/92177326080 - 22129x^23/2591207055360000 + 17651x^25/55306395648000000 - 61787x^27/6470848290816000000 + 2021x^29/8981758653235200000 - 73x^31/18002231783424000000 + 13x^33/245294925978009600000 - x^35/2211370923589632000000 + x^37/519802247686127616000000.

We continue to get the other approximations up to n = 5; for brevity, they are not listed.

The solutions obtained by the three proposed methods are equal to each other. Hence, as presented in the convergence proof in the previous section, the terms of the series Σ_{i=0}^∞ v_i(x) in Eq. (21) satisfy the convergence conditions; evaluating the β_i values for each iterative method, we get

(51)
β_0 = ‖v_1‖/‖v_0‖ = 0.114522 < 1, β_1 = ‖v_2‖/‖v_1‖ = 0.0358901 < 1, β_2 = ‖v_3‖/‖v_2‖ = 0.019054 < 1, β_3 = ‖v_4‖/‖v_3‖ = 0.0114183 < 1, β_4 = ‖v_5‖/‖v_4‖ = 0.00760755 < 1,
where the β_i values for i ≥ 0 and 0 < x ≤ 1 are less than 1, so the proposed iterative methods are convergent.

To examine the accuracy of the obtained approximate solution for this problem, the error remainder function is evaluated

(52)
ER_n(x) = y_n''(x) + y_n(x) - y_n^3(x)/6 + y_n^5(x)/120,
and the MER_n is:
(53)
MER_n = max_{0 ≤ x ≤ 1} |ER_n(x)|.
Fig. 5 shows the logarithmic plots of MER_n for the approximate solutions obtained by the proposed iterative methods, which indicates the efficiency of these methods. Moreover, the errors decrease as the number of iterations increases.
Fig. 5. Logarithmic plots for the MER_n versus n from 1 to 5, for problem 3.

In addition, the numerical comparison between the solutions obtained by the proposed methods, the exact solution, and the Runge-Kutta (RK4) and Euler methods for problem 3 is presented in Fig. 6. The agreement between the solutions can be clearly seen.

Fig. 6. The comparison of the solutions for problem 3.

Problem 4:

The nonlinear reactive transport model is given in the following form (Ellery and Simpson, 2011):

(54)
y''(x) - Py'(x) - Ay(x)/(B + y(x)) = 0,
with the given boundary conditions y'(0) = 0 and y(1) = 1. We select the parameter values P = 0, A = 1 and B = 2, as given in (Ellery and Simpson, 2011). The nonlinear second order ODE (54) then becomes
(55)
y''(x) = y(x)/(2 + y(x)).
After a simple manipulation of Eq. (55), we have
(56)
y''(x) + (1/2)y(x)y''(x) - (1/2)y(x) = 0,
with the following initial conditions: y(0) = a and y'(0) = 0, where the unknown constant a will be evaluated later from the given boundary condition y(1) = 1.

Solving the problem 4 by the TAM:

In order to solve Eq. (56) with the given initial conditions by the TAM, we have the following form:

(57)
L(y) = y'', N(y) = (1/2)y(x)y''(x) - (1/2)y(x). The initial problem is
(58)
L(y_0(x)) = 0, with y_0(0) = a and y_0'(0) = 0.
The next problems can be found from the generalized iterative formula L(y_{n+1}(x)) + N(y_n(x)) = 0, with y_{n+1}(0) = a and y_{n+1}'(0) = 0. By solving the initial problem (58), one gets y_0(x) = a. The first iteration y_1(x) can be found by solving y_1''(x) = -(1/2)y_0(x)y_0''(x) + (1/2)y_0(x) with y_1(0) = a and y_1'(0) = 0. The solution will be y_1(x) = a + ax^2/4. Applying the same process for y_2, we have y_2''(x) = -(1/2)y_1(x)y_1''(x) + (1/2)y_1(x) with y_2(0) = a and y_2'(0) = 0. By solving this problem, we obtain y_2(x) = a + ax^2/4 - a^2x^2/8 + ax^4/96 - a^2x^4/192.

Also, y_3 can be found by solving the problem y_3''(x) = -(1/2)y_2(x)y_2''(x) + (1/2)y_2(x) with y_3(0) = a and y_3'(0) = 0; we get y_3(x) = a + ax^2/4 - a^2x^2/8 + a^3x^2/16 + ax^4/96 - a^2x^4/64 + a^3x^4/128 - a^4x^4/768 + ax^6/5760 - a^2x^6/1440 + 7a^3x^6/11520 - 7a^4x^6/46080 - a^2x^8/86016 + a^3x^8/86016 - a^4x^8/344064. Continuing in this manner, the fourth and fifth approximations can be obtained, but for brevity they are not listed.

Solving problem 4 by the DJM:

Consider Eq. (56) with the initial conditions y(0) = a and y'(0) = 0.

Integrating both sides of Eq. (56) twice from 0 to x, we get

(59)
y(x) = a + ∫_0^x ∫_0^x (-(1/2)y(τ)y''(τ) + (1/2)y(τ)) dτ dτ,
and reducing the double integration in Eq. (59) to a single one (Wazwaz, 2015), we obtain
(60)
y(x) = a + ∫_0^x (x - τ)(-(1/2)y(τ)y''(τ) + (1/2)y(τ)) dτ.
Therefore, we have the following recurrence relation: y_0 = a, N(y_n) = ∫_0^x (x - τ)(-(1/2)y_n(τ)y_n''(τ) + (1/2)y_n(τ)) dτ, n ∈ ℕ ∪ {0}. By applying the DJM, we get y_0 = a, y_1 = ax^2/4, y_2 = -a^2x^2/8 + ax^4/96 - a^2x^4/192, y_3 = a^3x^2/16 - a^2x^4/96 + a^3x^4/128 - a^4x^4/768 + ax^6/5760 - a^2x^6/1440 + 7a^3x^6/11520 - 7a^4x^6/46080 - a^2x^8/86016 + a^3x^8/86016 - a^4x^8/344064. Therefore, we continue to get the other approximations up to n = 5 for y_n(x), but for brevity they are not listed.

Solving problem 4 by the BCM:

Consider Eq. (58); by following the same procedure as in the DJM, we obtain Eq. (60). So, let $y_0 = a$ and $N(y_{n-1}) = \int_0^x (x - \tau) \left( -\frac{1}{2} y_{n-1}(\tau) y''_{n-1}(\tau) + \frac{1}{2} y_{n-1}(\tau) \right) d\tau$, $n \in \mathbb{N}$.

By applying the BCM, we obtain:

(61)
$y_0 = a$, $y_1 = a + \frac{a x^2}{4}$, $y_2 = a + \frac{a x^2}{4} - \frac{a^2 x^2}{8} + \frac{a x^4}{96} - \frac{a^2 x^4}{192}$, $y_3 = a + \frac{a x^2}{4} - \frac{a^2 x^2}{8} + \frac{a^3 x^2}{16} + \frac{a x^4}{96} - \frac{a^2 x^4}{64} + \frac{a^3 x^4}{128} - \frac{a^4 x^4}{768} + \frac{a x^6}{5760} - \frac{a^2 x^6}{1440} + \frac{7 a^3 x^6}{11520} - \frac{7 a^4 x^6}{46080} - \frac{a^2 x^8}{86016} + \frac{a^3 x^8}{86016} - \frac{a^4 x^8}{344064}$. We continue to obtain the approximations up to $n = 5$; for brevity they are not listed.
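Both the DJM and the BCM above rely on reducing the double integral to a single one (Wazwaz, 2015), i.e. $\int_0^x \int_0^t f(\tau)\, d\tau\, dt = \int_0^x (x - \tau) f(\tau)\, d\tau$. This identity can be spot-checked symbolically; a small sketch assuming SymPy, with a sample integrand $f(\tau) = \tau^3$ of our own choosing:

```python
import sympy as sp

x, t, tau = sp.symbols('x t tau')

# Sample integrand; the reduction identity holds for any integrable f
f = tau**3

double = sp.integrate(sp.integrate(f, (tau, 0, t)), (t, 0, x))   # nested form
single = sp.integrate((x - tau) * f, (tau, 0, x))                # reduced form
assert sp.simplify(double - single) == 0                         # both equal x**5/20
```

The same check passes for any other polynomial integrand, which is all that is needed for the series iterates used here.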

The value of $a$ is evaluated by using the given boundary condition $y(1) = 1$, so we have $a = 0.8466736340782172$. Now, we can find the $\beta_i$ values in order to verify the convergence condition. Hence, from the terms of the series $\sum_{i=0}^{\infty} v_i(x)$ given in Eq. (21), we get

(62)
$\beta_0 = \frac{\|v_1\|}{\|v_0\|} = 0.397909 < 1$, $\beta_1 = \frac{\|v_2\|}{\|v_1\|} = 0.404609 < 1$, $\beta_2 = \frac{\|v_3\|}{\|v_2\|} = 0.411597 < 1$, $\beta_3 = \frac{\|v_4\|}{\|v_3\|} = 0.407664 < 1$, $\beta_4 = \frac{\|v_5\|}{\|v_4\|} = 0.40779 < 1$, where the $\beta_i$ values for $i \geq 0$ and $0 < x \leq 1$ are less than 1, so the proposed iterative methods are convergent.

To examine the accuracy of the approximate solution for this problem, the error remainder function is calculated:

(63)
$ER_n(x) = y''_n(x) + \frac{1}{2} y_n(x) y''_n(x) - \frac{1}{2} y_n(x)$, and the $MER_n$ is:
(64)
$MER_n = \max_{0 \leq x \leq 1} |ER_n(x)|$.
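Evaluating $MER_n$ numerically only requires sampling $ER_n$ on $[0,1]$. A minimal sketch for $n = 2$ (using the second BCM approximation and the fitted value of $a$ reported above; NumPy is assumed):

```python
import numpy as np

a = 0.8466736340782172   # fitted from the boundary condition y(1) = 1

def y2(x):
    # second BCM approximation for problem 4
    return a + a*x**2/4 - a**2*x**2/8 + a*x**4/96 - a**2*x**4/192

def y2pp(x):
    # its exact second derivative
    return a/2 - a**2/4 + (a/8 - a**2/16)*x**2

xs = np.linspace(0.0, 1.0, 1001)
er2 = y2pp(xs) + 0.5*y2(xs)*y2pp(xs) - 0.5*y2(xs)   # ER_2(x)
mer2 = float(np.max(np.abs(er2)))                   # MER_2
assert 0.01 < mer2 < 0.2
```

The sampled maximum is only a lower bound on the true supremum, but with a fine grid and a smooth polynomial residual it is accurate enough for the logarithmic plots.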

Fig. 7 shows the logarithmic plots of $MER_n$ for the approximate solutions obtained by the proposed iterative methods, which indicates that the methods are reliable and effective. Moreover, as the number of iterations increases, the errors decrease.

Fig. 7. Logarithmic plots for the $MER_n$ versus $n$ from 1 to 5, for problem 4.

Moreover, the numerical comparison between the solutions obtained by the proposed methods and the Runge-Kutta (RK4) and Euler methods for problem 4 is presented in Fig. 8, where good agreement can clearly be seen.
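This agreement can be cross-checked with a hand-rolled RK4 integrator. From the iteration used above, the ODE of problem 4 reads $y'' + \frac{1}{2} y y'' - \frac{1}{2} y = 0$, which we solve for $y''$ explicitly as $y'' = y/(y+2)$ — an assumption of this sketch, since Eq. (58) itself is stated earlier in the paper:

```python
import numpy as np

def f(state):
    # first-order system for y'' = y / (y + 2); state = (y, y')
    y, yp = state
    return np.array([yp, y / (y + 2.0)])

a = 0.8466736340782172       # fitted from the boundary condition y(1) = 1
h, n = 1.0 / 1000, 1000
state = np.array([a, 0.0])   # y(0) = a, y'(0) = 0
for _ in range(n):           # classical RK4 steps from x = 0 to x = 1
    k1 = f(state)
    k2 = f(state + h/2 * k1)
    k3 = f(state + h/2 * k2)
    k4 = f(state + h * k3)
    state = state + h/6 * (k1 + 2*k2 + 2*k3 + k4)

assert abs(state[0] - 1.0) < 1e-2   # y(1) reproduces the boundary condition
```

The integrated value at $x = 1$ lands close to the boundary value $y(1) = 1$, mirroring the agreement visible in Fig. 8 (the small residual reflects the truncation of the series used to fit $a$).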

Fig. 8. The comparison of the solutions for problem 4.

Problem 5:

The temperature distribution equation in a uniformly thick rectangular fin radiating to free space is given in the following form (Mohyud-Din et al., 2017):

(65)
$y''(x) - \varepsilon y^4(x) = 0$, with the given boundary conditions $y'(0) = 0$ and $y(1) = 1$.

Eq. (65) will be solved by the three iterative methods with the initial conditions $y(0) = a$ and $y'(0) = 0$, where the unknown constant $a$ will be evaluated later from the given boundary condition $y(1) = 1$. The effect of the parameter $\varepsilon$ on the obtained solution will be examined numerically later.

Solving problem 5 by the TAM:

In order to solve Eq. (65) with the initial conditions by the TAM, we have the following form

(66)
$L(y) = y''$, $N(y) = -\varepsilon y^4(x)$. The initial problem is
(67)
$L(y_0(x)) = 0$ with $y_0(0) = a$ and $y'_0(0) = 0$.
The next problems follow from the generalized iterative formula $L(y_{n+1}(x)) + N(y_n(x)) = 0$, with $y_{n+1}(0) = a$ and $y'_{n+1}(0) = 0$. By solving the initial problem (67), one can get $y_0(x) = a$. The first iteration $y_1(x)$ can be found by solving

$y''_1(x) = \varepsilon y_0^4(x)$ with $y_1(0) = a$ and $y'_1(0) = 0$. The solution is $y_1(x) = a + \frac{1}{2} \varepsilon a^4 x^2$. Applying the same process for $y_2$: $y''_2(x) = \varepsilon y_1^4(x)$ with $y_2(0) = a$ and $y'_2(0) = 0$. Solving this problem, we get $y_2(x) = a + \frac{1}{2} \varepsilon a^4 x^2 + \frac{1}{6} \varepsilon^2 a^7 x^4 + \frac{1}{20} \varepsilon^3 a^{10} x^6 + \frac{1}{112} \varepsilon^4 a^{13} x^8 + \frac{1}{1440} \varepsilon^5 a^{16} x^{10}$. Continuing in this manner, the other approximations up to $n = 5$ can be obtained, but for brevity the evaluated terms are not listed.
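These TAM iterates are convenient to generate symbolically. The sketch below (assuming SymPy) uses the equivalent Volterra form of each linear subproblem; the helper name `tam_step` is our own:

```python
import sympy as sp

x, tau, a, eps = sp.symbols('x tau a epsilon', positive=True)

def tam_step(y):
    # solve y_{n+1}'' = eps * y_n**4 with y_{n+1}(0) = a, y_{n+1}'(0) = 0,
    # via y_{n+1} = a + int_0^x (x - tau) * eps * y_n(tau)**4 dtau
    yt = y.subs(x, tau)
    return sp.expand(a + sp.integrate(sp.expand((x - tau) * eps * yt**4), (tau, 0, x)))

y1 = tam_step(a)
assert sp.simplify(y1 - (a + eps*a**4*x**2/2)) == 0      # matches y_1 above
y2 = tam_step(y1)
assert sp.simplify(y2.coeff(x, 4) - eps**2*a**7/6) == 0  # matches the x^4 term of y_2
```

Each further call to `tam_step` produces the next iterate; the coefficient checks confirm the closed-form terms quoted above.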

Solving problem 5 by the DJM:

Consider Eq. (65) with the initial conditions $y(0) = a$ and $y'(0) = 0$.

Integrating both sides of Eq. (65) twice from 0 to $x$, we get

(68)
$y(x) = a + \int_0^x \int_0^x \varepsilon y^4(\tau)\, d\tau\, d\tau$, and reducing the integration in Eq. (68) from double to single (Wazwaz, 2015), we have
(69)
$y(x) = a + \int_0^x (x - \tau)\, \varepsilon y^4(\tau)\, d\tau$.
Hence, we have the following recurrence relation: $y_0 = a$, $N(y_n) = \int_0^x (x - \tau)\, \varepsilon y_n^4(\tau)\, d\tau$, $n \in \mathbb{N} \cup \{0\}$. By applying the DJM, we get
(70)
$y_0 = a$, $y_1 = \frac{1}{2} \varepsilon a^4 x^2$, $y_2 = \frac{1}{6} \varepsilon^2 a^7 x^4 + \frac{1}{20} \varepsilon^3 a^{10} x^6 + \frac{1}{112} \varepsilon^4 a^{13} x^8 + \frac{1}{1440} \varepsilon^5 a^{16} x^{10}$.
We continue in this way to obtain the other approximations up to $n = 5$ for $y_n(x)$, but the terms are not listed.

Solving problem 5 by the BCM:

Consider Eq. (65); by applying the same procedure as in the DJM, we obtain Eq. (69). So, let $y_0 = a$ and $N(y_{n-1}) = \int_0^x (x - \tau)\, \varepsilon y_{n-1}^4(\tau)\, d\tau$, $n \in \mathbb{N}$.

Applying the BCM, we obtain: $y_0 = a$, $y_1 = a + \frac{1}{2} \varepsilon a^4 x^2$, $y_2 = a + \frac{1}{2} \varepsilon a^4 x^2 + \frac{1}{6} \varepsilon^2 a^7 x^4 + \frac{1}{20} \varepsilon^3 a^{10} x^6 + \frac{1}{112} \varepsilon^4 a^{13} x^8 + \frac{1}{1440} \varepsilon^5 a^{16} x^{10}$.

We continue to obtain the approximations up to $n = 5$; for brevity they are not listed.

The value of $a$ is obtained by using the given boundary condition $y(1) = 1$; it depends on the value of $\varepsilon$. At $\varepsilon = 0.1$ we get $a = 0.9568205007628632$. Now, we can find the $\beta_i$ values in order to verify the convergence condition. Hence, from the terms of the series $\sum_{i=0}^{\infty} v_i(x)$ given in Eq. (21), we have

(71)
$\beta_0 = \frac{\|v_1\|}{\|v_0\|} = 0.0697115 < 1$, $\beta_1 = \frac{\|v_2\|}{\|v_1\|} = 0.0243227 < 1$, $\beta_2 = \frac{\|v_3\|}{\|v_2\|} = 0.0111531 < 1$, $\beta_3 = \frac{\|v_4\|}{\|v_3\|} = 0.00621257 < 1$, $\beta_4 = \frac{\|v_5\|}{\|v_4\|} = 0.00394466 < 1$, where the $\beta_i$ values for $i \geq 0$ and $0 < x \leq 1$ are less than 1, so the proposed iterative methods are convergent.
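The constant $a$ used above can also be recovered numerically. A minimal sketch: bisection on the condition $y_2(1) = 1$ at $\varepsilon = 0.1$, using only the second iterate — an assumption of this sketch, since the paper fits $a$ with the fifth iterate; with the small $\beta_i$ values above, the truncation shifts the root only marginally:

```python
eps = 0.1

def y2_at_1(a):
    # y_2(1) for problem 5, from the series obtained above
    return (a + eps*a**4/2 + eps**2*a**7/6 + eps**3*a**10/20
            + eps**4*a**13/112 + eps**5*a**16/1440)

lo, hi = 0.0, 1.0
for _ in range(60):            # bisection: y2_at_1 is increasing on [0, 1]
    mid = (lo + hi) / 2
    if y2_at_1(mid) < 1.0:
        lo = mid
    else:
        hi = mid

a = (lo + hi) / 2
assert abs(a - 0.9568205007628632) < 1e-3   # close to the reported value
```

Sixty bisection steps resolve the root far below the series truncation error, so the tolerance here is set by the use of $y_2$ rather than by the root finder.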

To examine the accuracy of the approximate solution for this problem, the error remainder function is $ER_n(x) = y''_n(x) - \varepsilon y_n^4(x)$,

and the MER n is:

(72)
$MER_n = \max_{0 \leq x \leq 1} |ER_n(x)|$.

Fig. 9 shows the logarithmic plots of $MER_n$ for the approximate solutions obtained by the proposed iterative methods; it can be seen that as the number of iterations increases, the errors decrease.

Fig. 9. Logarithmic plots for the $MER_n$ versus $n$ from 1 to 5, for problem 5.

The numerical comparison between the solutions obtained by the proposed methods and the Runge-Kutta (RK4) and Euler methods for problem 5 is given in Fig. 10; once again, good agreement between the solutions can be noticed.

Fig. 10. The comparison of the solutions for problem 5.

Moreover, we have plotted the effect of different values of $\varepsilon$ in Fig. 11.

Fig. 11. The numerical effect of the $\varepsilon$ values for problem 5.

Problem 6:

The equation of motion for a system of mass with serial linear and nonlinear stiffness on a frictionless contact surface is presented by Ganji and Babazadeh (2009):

(73)
$(1 + 0.1365 y^2)\, y'' + 0.2730\, y (y')^2 + 4.5454\, y + 2.2727\, y^3 = 0$. For more details on this model, we refer the reader to Ganji and Babazadeh (2009). Eq. (73) can be manipulated to obtain the nonlinear second order ODE
(74)
$y'' + 0.1365\, y^2 y'' + 0.2730\, y (y')^2 + 4.5454\, y + 2.2727\, y^3 = 0$,
with the initial conditions $y = 0.5$ and $y' = 0$ at $x = 0$.

Solving problem 6 by the TAM:

In order to solve Eq. (74) with the initial conditions by the TAM, we have the following operators

(75)
$L(y) = y''$, $N(y) = 0.1365\, y^2 y'' + 0.2730\, y (y')^2 + 4.5454\, y + 2.2727\, y^3$. The initial problem is
(76)
$L(y_0) = 0$ with $y_0(0) = 0.5$ and $y'_0(0) = 0$.
The next problems can be found from the generalized iterative formula

$L(y_{n+1}) + N(y_n) = 0$, with $y_{n+1}(0) = 0.5$ and $y'_{n+1}(0) = 0$.

Solving the initial problem (76), one can get $y_0 = 0.5$.

The first iteration y 1 ( x ) can be found by solving

$y''_1 = -(0.1365\, y_0^2 y''_0 + 0.2730\, y_0 (y'_0)^2 + 4.5454\, y_0 + 2.2727\, y_0^3)$ with $y_1(0) = 0.5$ and $y'_1(0) = 0$.

The solution is $y_1 = 0.5 - 1.27839375\, x^2$. Applying the same process for $y_2$:

$y''_2 = -(0.1365\, y_1^2 y''_1 + 0.2730\, y_1 (y'_1)^2 + 4.5454\, y_1 + 2.2727\, y_1^3)$ with $y_2(0) = 0.5$ and $y'_2(0) = 0$.

Solving this problem, we get $y_2 = 0.5 - 1.23476856328125\, x^2 + 0.5542817560763965\, x^4 - 0.09065096778687438\, x^6 + 0.08479065714299731\, x^8$. Continuing in this manner, the other approximations up to $n = 5$ can be obtained, but for brevity they are not listed.
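The leading coefficient of $y_1$ is easy to confirm: with the constant $y_0 = 0.5$, the derivative terms vanish, so $y''_1 = -(4.5454\, y_0 + 2.2727\, y_0^3)$ is constant, and integrating twice from rest gives the $x^2$ coefficient. A two-line check:

```python
y0 = 0.5
# y1'' is constant here, so y1 = y0 + (y1''/2) * x**2
c = -(4.5454 * y0 + 2.2727 * y0**3) / 2.0   # coefficient of x**2 in y1
assert abs(c - (-1.27839375)) < 1e-12
```

The same bookkeeping, applied to the polynomial $y_1$, reproduces the $y_2$ coefficients quoted above.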

Solving problem 6 by the DJM:

Consider Eq. (74) with the initial conditions $y(0) = 0.5$ and $y'(0) = 0$.

Integrating both sides of Eq. (74) twice from 0 to $x$, we get

(77)
$y = 0.5 - \int_0^x \int_0^x \left( 0.1365\, y^2 y'' + 0.2730\, y (y')^2 + 4.5454\, y + 2.2727\, y^3 \right) d\tau\, d\tau$, and reducing the integration in Eq. (77) from double to single (Wazwaz, 2015), we achieve
(78)
$y = 0.5 - \int_0^x (x - \tau) \left( 0.1365\, y^2 y'' + 0.2730\, y (y')^2 + 4.5454\, y + 2.2727\, y^3 \right) d\tau$.
Hence, we have the following recurrence relation: $y_0 = 0.5$, $N(y_n) = -\int_0^x (x - \tau) \left( 0.1365\, y_n^2 y''_n + 0.2730\, y_n (y'_n)^2 + 4.5454\, y_n + 2.2727\, y_n^3 \right) d\tau$, $n \in \mathbb{N} \cup \{0\}$.

By applying the DJM, we get $y_0 = 0.5$, $y_1 = -1.27839375\, x^2$, $y_2 = 0.04362518671875004\, x^2 + 0.5542817560763965\, x^4 - 0.09065096778687438\, x^6 + 0.08479065714299731\, x^8$. We continue in this way to obtain the approximations up to $n = 5$ for $y_n$, but the terms are not listed.

Solving problem 6 by the BCM:

Consider Eq. (74); by following the same procedure used in the DJM, we obtain Eq. (78). So, let $y_0 = 0.5$ and $N(y_{n-1}) = -\int_0^x (x - \tau) \left( 0.1365\, y_{n-1}^2 y''_{n-1} + 0.2730\, y_{n-1} (y'_{n-1})^2 + 4.5454\, y_{n-1} + 2.2727\, y_{n-1}^3 \right) d\tau$, $n \in \mathbb{N}$. Applying the BCM, we obtain:

(79)
$y_0 = 0.5$, $y_1 = 0.5 - 1.27839375\, x^2$, $y_2 = 0.5 - 1.23476856328125\, x^2 + 0.5542817560763965\, x^4 - 0.09065096778687438\, x^6 + 0.08479065714299731\, x^8$. We continue to obtain the approximations up to $n = 5$; for brevity they are not listed.

Now, the $\beta_i$ values are evaluated in order to verify the convergence condition. Hence, from the terms of the series $\sum_{i=0}^{\infty} v_i(x)$ given in Eq. (21), we have

(80)
$\beta_1 = \frac{\|v_2\|}{\|v_1\|} = 0.378314 < 1$, $\beta_2 = \frac{\|v_3\|}{\|v_2\|} = 0.190321 < 1$, $\beta_3 = \frac{\|v_4\|}{\|v_3\|} = 0.0927949 < 1$, $\beta_4 = \frac{\|v_5\|}{\|v_4\|} = 0.0772565 < 1$, where the $\beta_i$ values for $i \geq 1$ and $0 < x \leq 1$ are less than 1, so the proposed iterative methods are convergent.

To examine the accuracy of the approximate solution for this problem, the error remainder function is

(81)
$ER_n(x) = y''_n(x) + 0.1365\, y_n^2(x) y''_n(x) + 0.2730\, y_n(x) (y'_n(x))^2 + 4.5454\, y_n(x) + 2.2727\, y_n^3(x)$, and the $MER_n$ is:
(82)
$MER_n = \max_{0 \leq x \leq 1} |ER_n(x)|$.

Fig. 12 shows the logarithmic plots of $MER_n$ for the approximate solutions obtained by the proposed iterative methods; again, as the number of iterations increases, the errors decrease.

Fig. 12. Logarithmic plots for the $MER_n$ versus $n$ from 1 to 5, for problem 6.

The numerical comparison between the solutions obtained by the proposed methods and the Runge-Kutta (RK4) and Euler methods for problem 6 is presented in Fig. 13.

Fig. 13. The comparison of the solutions for problem 6.

In Table 1, the convergence rate (CR) for all problems was estimated by using the following formula: $\rho = \frac{\log(MER_4 / MER_3)}{\log(MER_3 / MER_2)}$.
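As a sanity check on this formula, an error sequence that decays geometrically, $MER_n = C r^n$, yields exactly $\rho = 1$, the signature of linear convergence. A small sketch (the function name and the sample values $C$, $r$ are illustrative):

```python
import math

def convergence_rate(mer2, mer3, mer4):
    # rho = log(MER4 / MER3) / log(MER3 / MER2)
    return math.log(mer4 / mer3) / math.log(mer3 / mer2)

# geometric error decay MER_n = C * r**n gives rho = 1
C, r = 0.5, 0.1
rho = convergence_rate(C * r**2, C * r**3, C * r**4)
assert abs(rho - 1.0) < 1e-12
```

Values of $\rho$ above 1 would instead indicate superlinear decay of the maximal error remainders.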

Table 1. The rate of convergence for the six problems.
 Problem 1 | Problem 2 | Problem 3 | Problem 4 | Problem 5 | Problem 6
CR: 1.03613 | 1.10076 | 1.13879 | 1. | 1.14431 | 1.05231

Linear convergence, i.e. $\rho \approx 1$, has been achieved for all problems.

5 Conclusion

In this paper, we have successfully used the TAM, DJM and BCM to solve several problems containing nonlinear second order ODEs that arise in physics. Each solution has been obtained in a series form. We have also solved these problems by the numerical Runge-Kutta (RK4) and Euler methods and compared the numerical results with the approximate solutions, finding good agreement. The calculations in this study were carried out with the aid of Mathematica®10.

Acknowledgement

The author would like to thank the anonymous referees, Managing Editor and Editor in Chief for their valuable suggestions.

References

  1. , . A semi-analytical iterative method for solving nonlinear thin film flow problems. Chaos, Solitons Fractals. 2017;99:52-56.
  2. , , . A reliable iterative method for solving Volterra integro-differential equations and some applications for the Lane-Emden equations of the first kind. Monthly Not. R. Astronomical Soc. 2015;448:3093-3104.
  3. , , . A semi analytical iterative technique for solving Duffing equations. Int. J. Pure Appl. Math. 2016;108(4):871-885.
  4. AL-Jawary, M.A., Hatif, S., 2017. A semi-analytical iterative method for solving differential algebraic equations. Ain Shams Eng. J. (In Press).
  5. , , . A semi-analytical iterative technique for solving chemistry problems. J. King Saud Univ. 2017;29(3):320-332.
  6. , , , . A semi-analytical method for solving Fokker-Planck’s equations. J. Assoc. Arab Univ. Basic Appl. Sci. 2017;24:254-262.
  7. , , . Solving evolution equations using a new iterative method. Numer. Methods Partial Diff. Eq. 2010;26(4):906-916.
  8. , , . New iterative method: application to partial differential equations. Appl. Math. Comput. 2008;203(2):778-783.
  9. , , . Solving fractional diffusion-wave equations using a new iterative method. Fractional Calculus Appl. Anal. 2008;11(2):193-202.
  10. , , . Solving nonlinear functional equation using Banach contraction principle. Far East J. Appl. Math. 2009;34(3):303-314.
  11. , , . An iterative method for solving nonlinear functional equations. J. Math. Anal. Appl. 2006;316(2):753-763.
  12. , . New recurrence algorithms for the nonclassic Adomian polynomials. Comput. Math. Appl. 2011;62:2961-2977.
  13. , , , , . An iterative method for solving partial differential equations and solution of Korteweg-de Vries equations for showing the capability of the iterative method. World Appl. Programming. 2013;3(8):320-327.
  14. , , . An analytical method to solve a general class of nonlinear reactive transport models. Chem. Eng. J. 2011;169:313-318.
  15. , , . M.H. Jalaei, H. Tashakkorian, Application of He’s variational iteration method for solving nonlinear BBMB equations and free vibration of systems. Acta Applicandae Mathematicae. 2009;106:359-367.
  16. , . Variational iteration method – a kind of non-linear analytical technique: some examples. Int. J. Non Linear Mech. 1999;34:699-708.
  17. , , . A reliable treatment of homotopy perturbation method for solving second Painlevé equations. Int. J. Res. Rev. Appl. Sci. 2012;12(2):201-206.
  18. , , . The use of variational iteration method and homotopy perturbation method for Painlevé equation I. Appl. Math. Sci. 2009;3(38):1861-1871.
  19. , , . Some Topics in Nonlinear Functional Analysis. John Wiley and Sons; .
  20. , , . Application of homotopy analysis method in nonlinear oscillations. J. Appl. Mech. 1998;65(4):914-922.
  21. , , , , . Optimal variational iteration method for nonlinear problems. J. Assoc. Arab Univ. Basic Appl. Sci. 2017;24:191-197.
  22. , . A study on the convergence of variational iteration method. Math. Comput. Modell. 2010;51(9–10):1181-1192.
  23. , , . A new iterative technique for solving nonlinear second order multi-point boundary value problems. Appl. Math. Comput. 2011;218(4):1457-1466.
  24. , , . A computational iterative method for solving nonlinear ordinary differential equations. LMS J. Comput. Math. 2015;18(1):730-753.
  25. , . A first course in integral equations. World Scientific Publishing Company; .