ORIGINAL ARTICLE
23 (1); 53-61. doi: 10.1016/j.jksus.2010.06.007

Some resolvent methods for general variational inclusions

Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan
Mathematics Department, College of Science, King Saud University, Riyadh, Saudi Arabia

*Corresponding author at: Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan. Tel.: +92 23454027532. E-mail: noormaslam@hotmail.com (Muhammad Aslam Noor).

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Available online 20 June 2010

Abstract

In this paper, we consider and analyze some classes of resolvent-splitting methods for solving the general variational inclusions using the technique of updating the solution. These resolvent-splitting methods are self-adaptive-type methods, where the corrector step size involves the resolvent equation. We prove that the convergence of these new methods requires only pseudomonotonicity, which is a weaker condition than monotonicity. These new methods differ from the previously known splitting and inertial proximal methods for solving general variational inclusions and related complementarity problems. The proposed methods include several new and known methods as special cases. Our results may be viewed as a refinement and improvement of previously known methods.

Keywords

Variational inequalities
Splitting methods
Inertial proximal method
Auxiliary principle
Fixed-point
Convergence

1 Introduction

Variational inequalities theory has played a significant and fundamental role in the development of new and innovative techniques for solving complex problems arising in pure and applied sciences, see (Alvarez, 2000; Alvarez and Attouch, 2001; Brezis, 1973; El Farouq, 2001; Giannessi and Maugeri, 1995; Giannessi et al., 2001; Glowinski and Tallec, 1989; Haubruge et al., 1998; He and Liao, 2002; Kinderlehrer and Stampacchia, 2000; Moudafi and Noor, 1999; Moudafi and Thera, 1997; Noor, 1988, 1993, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a-d; Noor and Noor, 2010, 2004; Noor et al., 1993; Noor and Rassias, 2002; Patriksson, 1998; Shi, 1991; Stampacchia, 1964; Tseng, 2000; Uko, 1998; Xiu et al., 2001). Variational inequalities have been extended and generalized in various directions using novel and innovative techniques. A useful and important generalization is the general variational inclusion involving the sum of two nonlinear operators T and A. Moudafi and Noor (1999) studied the sensitivity analysis of variational inclusions by using the technique of the resolvent equations. Recently, much attention has been given to developing iterative algorithms for solving variational inclusions. It is known that such algorithms require an evaluation of the resolvent operator of the type $(I + \rho(T + A))^{-1}$. The main difficulty with such problems is that this resolvent operator may be hard to invert. This difficulty has been overcome by using the resolvent operators $(I + \rho T)^{-1}$ and $(I + \rho A)^{-1}$ separately, rather than $(I + \rho(T + A))^{-1}$. Such a technique is called the splitting method. These methods for solving variational inclusions have been studied extensively, see, for example (Glowinski and Tallec, 1989; Moudafi and Thera, 1997; Noor, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a) and the references therein. In the context of mixed variational inequalities (variational inclusions), Noor (2000, 2001a, 2002b, 2003, 2004) has used the resolvent operator and resolvent equations techniques to suggest and analyze a number of resolvent-type iterative methods. A useful feature of these splitting methods is that the resolvent step involves the subdifferential of the proper, convex and lower-semicontinuous function only, and the other part facilitates the problem decomposition.

Noor (1998) introduced and considered the general variational inclusion, which is an important and significant generalization of variational inequalities. It turned out that a wide class of nonsymmetric, odd-order, free, moving, unilateral and equilibrium problems arising in elasticity, transportation, circuit analysis, oceanography, nonlinear optimization, finance, economics and operations research can be studied via general variational inclusions, see (Alvarez, 2000; Alvarez and Attouch, 2001; Brezis, 1973; El Farouq, 2001; Giannessi and Maugeri, 1995; Giannessi et al., 2001; Glowinski and Tallec, 1989; Haubruge et al., 1998; He and Liao, 2002; Kinderlehrer and Stampacchia, 2000; Moudafi and Noor, 1999; Moudafi and Thera, 1997; Noor, 1988, 1993, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a-d; Noor and Noor, 2010, 2004; Noor et al., 1993; Noor and Rassias, 2002; Patriksson, 1998; Shi, 1991; Stampacchia, 1964; Tseng, 2000; Uko, 1998; Xiu et al., 2001). Variational inclusion theory is experiencing explosive growth in both theory and applications; as a consequence, several numerical techniques, including resolvent operator, resolvent equations, auxiliary principle, decomposition and descent methods, are being developed for solving various classes of variational inclusions and related optimization problems. Resolvent methods and their variant forms, including the resolvent equations, represent important tools for finding the approximate solution of variational inclusions. The main idea in this technique is to establish the equivalence between the variational inclusions and the fixed-point problem by using the concept of the resolvent operator. This alternative formulation has played a significant part in developing various resolvent methods for solving variational inclusions. It is well known that the convergence of the resolvent methods requires that the operator must be strongly monotone and Lipschitz continuous. Unfortunately, these strict conditions rule out many applications of this method. This fact has motivated the modification of the resolvent method and the development of other methods. The extragradient method overcomes this difficulty by performing an additional forward step and a projection at each iteration according to the double resolvent. This method can be viewed as a predictor-corrector method. Its convergence requires that a solution exists and that the monotone operator is Lipschitz continuous. When the operator is not Lipschitz continuous, or when the Lipschitz constant is not known, the extraresolvent method and its variant forms require an Armijo-like line search procedure to compute the step size, with a new projection needed for each trial, which leads to expensive computations. To overcome these difficulties, several modified resolvent and extraresolvent-type methods have been suggested and developed for solving variational inequalities, see (Noor, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a) and the references therein. Glowinski and Tallec (1989) have suggested and analyzed some three-step splitting methods for solving variational inclusion problems by using the Lagrange multipliers technique. They have shown that three-step splitting methods are numerically more efficient than one-step and two-step splitting methods. They have studied the convergence of these splitting methods under the assumption that the underlying operator is monotone and Lipschitz continuous.
Noor (2004) has suggested some three-step projection-splitting methods for various classes of variational inequalities and variational inclusions using the technique of updating the solution, in which the order of T and the resolvent operator $J_A = (I + \rho A)^{-1}$ associated with the maximal monotone operator A has not been changed. These three-step splitting methods are compatible with the three-step splitting methods of Glowinski and Tallec (1989). For the applications and convergence analysis of three-step splitting methods, see (Glowinski and Tallec, 1989; He and Liao, 2002; Moudafi and Thera, 1997; Noor, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004) and the references therein.

In this paper, we suggest and analyze a class of self-adaptive resolvent methods by modifying the fixed-point equations involving a generalized residue vector associated with the variational inclusions. These methods are simple and robust. The search direction in these methods is a combination of the generalized resolvent residue and the modified extraresolvent direction. These new methods are different from the existing one-step, two-step and three-step projection-splitting methods. We prove that the convergence of the proposed methods requires only pseudomonotonicity, which is a weaker condition than monotonicity.

Noor (2004) and El Farouq (2001) have used the auxiliary principle technique to suggest and analyze a class of proximal (implicit) methods. Alvarez (2000) and Alvarez and Attouch (2001) have considered an inertial proximal method for maximal monotone operators via the discretization of a second-order differential equation in time, which includes the classical proximal method. We again use the equivalent fixed-point formulation of the variational inclusions to suggest an inertial proximal method for general variational inclusions. We show that the convergence of the inertial proximal method requires only pseudomonotonicity, which is a weaker condition than monotonicity. Thus it is clear that our results improve the convergence criteria of the inertial proximal methods of Alvarez and Attouch (2001). Our proof of convergence is very simple as compared with other methods. Since general variational inclusions include classical variational inclusions and general (quasi) complementarity problems as special cases, the results obtained in this paper continue to hold for these problems. The comparison of these methods with the existing ones is an interesting problem for future research work.


2 Preliminaries

Let H be a real Hilbert space, whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let K be a closed convex set in H and let $T, g : H \to H$ be nonlinear operators. Let $\varphi : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex and lower semicontinuous function.

For given nonlinear operators $T, g : H \to H$ and a maximal monotone operator A on H, consider the problem of finding $u \in H$ such that

(2.1)
$0 \in Tu + A(g(u)),$
which is called the general variational inclusion. Problem (2.1) is being studied extensively and has important applications in operations research and engineering sciences. Variational inclusions have been studied and considered by many authors, including Moudafi and Noor (1999) and Uko (1998).

Note that if $g \equiv I$, the identity operator, then problem (2.1) is equivalent to finding $u \in H$ such that

(2.2)
$0 \in Tu + A(u).$
Problem (2.2) is known as the problem of finding a zero of the sum of two monotone operators, see (Glowinski and Tallec, 1989; Haubruge et al., 1998; Moudafi and Noor, 1999; Moudafi and Thera, 1997).

If $A(\cdot) = \partial\varphi(\cdot)$, where $\partial\varphi(\cdot)$ denotes the subdifferential of a proper, convex and lower semicontinuous function $\varphi : H \to \mathbb{R} \cup \{+\infty\}$, then problem (2.1) reduces to finding $u \in H$ such that

(2.3)
$0 \in Tu + \partial\varphi(g(u)),$
or equivalently, finding $u \in H$ such that
(2.4)
$\langle Tu, g(v) - g(u) \rangle + \varphi(g(v)) - \varphi(g(u)) \geq 0, \quad \forall v \in H.$
The inequality of type (2.4) is called the general mixed variational inequality or the general variational inequality of the second kind. It can be shown that a wide class of linear and nonlinear problems arising in pure and applied sciences can be studied via the general mixed variational inequalities (2.4).

We remark that if $g \equiv I$, the identity operator, then problem (2.4) is equivalent to finding $u \in H$ such that

(2.5)
$\langle Tu, v - u \rangle + \varphi(v) - \varphi(u) \geq 0, \quad \forall v \in H,$
which is known as the mixed variational inequality. For the applications, numerical methods and formulations, see (Alvarez and Attouch, 2001; Brezis, 1973; El Farouq, 2001; Giannessi and Maugeri, 1995; Giannessi et al., 2001; Glowinski and Tallec, 1989; Haubruge et al., 1998; Kinderlehrer and Stampacchia, 2000; Noor, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a) and the references therein.

We note that if $\varphi$ is the indicator function of a closed convex set K in H, that is, $\varphi(u) \equiv I_K(u) = \begin{cases} 0, & \text{if } u \in K, \\ +\infty, & \text{otherwise}, \end{cases}$ then the general mixed variational inequality (2.4) is equivalent to finding $u \in H$, $g(u) \in K$ such that

(2.6)
$\langle Tu, g(v) - g(u) \rangle \geq 0, \quad \forall v \in H : g(v) \in K.$
Problem (2.6) is called the general variational inequality, which was first introduced and studied by Noor (1988). It has been shown that a large class of unrelated odd-order and nonsymmetric obstacle, unilateral, contact, free, moving, and equilibrium problems arising in regional, physical, mathematical, engineering and applied sciences can be studied in the unified and general framework of the general variational inequalities (2.6). It has been shown in Noor (2000, 2004) that a class of quasi variational inequalities and nonlinear nonconvex programming problems can be viewed as general variational inequalities (2.6). For the applications, formulation and numerical methods of general variational inequalities (2.6), see (Noor, 1988, 1993, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a-d; Noor and Noor, 2010, 2004; Noor et al., 1993; Noor and Rassias, 2002) and the references therein.

For $g \equiv I$, where I is the identity operator, problem (2.6) is equivalent to finding $u \in K$ such that

(2.7)
$\langle Tu, v - u \rangle \geq 0, \quad \forall v \in K,$
which is known as the classical variational inequality, introduced and studied by Stampacchia (1964). For the recent state of the art, see (Alvarez, 2000; Alvarez and Attouch, 2001; Brezis, 1973; El Farouq, 2001; Giannessi and Maugeri, 1995; Giannessi et al., 2001; Glowinski and Tallec, 1989; Haubruge et al., 1998; He and Liao, 2002; Kinderlehrer and Stampacchia, 2000; Moudafi and Noor, 1999; Moudafi and Thera, 1997; Noor, 1988, 1993, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a-d; Noor and Noor, 2010, 2004; Noor et al., 1993; Noor and Rassias, 2002; Patriksson, 1998; Shi, 1991; Stampacchia, 1964; Tseng, 2000; Uko, 1998; Xiu et al., 2001).

From now onward, we assume that g is onto K unless otherwise specified.

If $N(u) = \{w \in H : \langle w, v - u \rangle \leq 0, \ \forall v \in K\}$ is the normal cone to the convex set K at u, then the general variational inequality (2.6) is equivalent to finding $u \in H$, $g(u) \in K$ such that $0 \in Tu + N(g(u))$, a problem known as the general nonlinear equations, see (Uko, 1998).

If $T_{tg}(u)$ is the projection of $-Tu$ at $g(u) \in K$, then it has been shown that the general variational inequality (2.6) is equivalent to finding $u \in H$, $g(u) \in K$ such that $T_{tg}(u) = 0$; these are known as the tangent projection equations, see (Xiu et al., 2001). This equivalence has been used to discuss the local convergence analysis of a wide class of iterative methods for solving general variational inequalities (2.6).

If $K^* = \{u \in H : \langle u, v \rangle \geq 0, \ \forall v \in K\}$ is the polar (dual) cone of a convex cone K in H, then problem (2.1) is equivalent to finding $u \in H$ such that

(2.8)
$g(u) \in K, \quad Tu \in K^* \quad \text{and} \quad \langle Tu, g(u) \rangle = 0,$
which is known as the general complementarity problem, see Noor (1988, 1993, 2004). For $g(u) = m(u) + K$, where m is a point-to-point mapping, problem (2.8) is called the implicit (quasi) complementarity problem. If $g \equiv I$, then problem (2.8) is known as the generalized complementarity problem. Such problems have been studied extensively in the literature; see the references. For suitable and appropriate choices of the operators and spaces, one can obtain several classes of variational inclusions and related optimization problems.
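To make the complementarity formulation concrete, the following display (our illustration, not part of the original development) instantiates (2.8) with the assumed choice $H = \mathbb{R}^n$ and $K = \mathbb{R}^n_+$, for which $K^* = \mathbb{R}^n_+$:
$g(u) \geq 0, \qquad Tu \geq 0, \qquad \langle Tu, g(u) \rangle = \sum_{i=1}^{n} (Tu)_i \, g(u)_i = 0,$
so that, componentwise, at least one of $(Tu)_i$ and $g(u)_i$ vanishes, which is the familiar complementarity condition.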

We now recall the following well-known concepts and results.

Definition 2.1 (Brezis, 1973)

If A is a maximal monotone operator on H, then, for a constant $\rho > 0$, the resolvent operator associated with A is defined by $J_A(u) = (I + \rho A)^{-1}(u)$, $\forall u \in H$, where I is the identity operator. It is well known that a monotone operator is maximal if and only if its resolvent operator is defined everywhere. In addition, the resolvent operator is single-valued and nonexpansive, that is, $\|J_A(u) - J_A(v)\| \leq \|u - v\|$, $\forall u, v \in H$.
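For intuition, the following minimal numerical sketch (ours, not part of the original paper) evaluates the resolvent for the assumed special case $A = \partial\varphi$ with $\varphi(u) = \lambda\|u\|_1$ on $\mathbb{R}^n$, where $J_A$ is the classical soft-thresholding map, and checks the nonexpansiveness bound on random data:

```python
import numpy as np

def J_A(u, rho, lam):
    """Resolvent (I + rho*A)^{-1} for A = subdifferential of phi(u) = lam*||u||_1.

    For this (assumed, illustrative) choice of A, the resolvent is the
    componentwise soft-thresholding map with threshold rho*lam."""
    return np.sign(u) * np.maximum(np.abs(u) - rho * lam, 0.0)

rng = np.random.default_rng(0)
u, v = rng.normal(size=5), rng.normal(size=5)

# Nonexpansiveness: ||J_A(u) - J_A(v)|| <= ||u - v||
lhs = np.linalg.norm(J_A(u, 0.5, 1.0) - J_A(v, 0.5, 1.0))
rhs = np.linalg.norm(u - v)
print(lhs <= rhs)  # True
```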

Remark 2.1

It is well known that the subdifferential $\partial\varphi$ of a proper, convex and lower semicontinuous function $\varphi : H \to \mathbb{R} \cup \{+\infty\}$ is a maximal monotone operator. We denote by $J_\varphi(u) = (I + \rho\,\partial\varphi)^{-1}(u)$, $\forall u \in H$, the resolvent operator associated with $\partial\varphi$, which is defined everywhere on H.

Lemma 2.1 (Brezis, 1973)

For a given $z \in H$, $u \in H$ satisfies the inequality

(2.9)
$\langle u - z, v - u \rangle + \rho\varphi(v) - \rho\varphi(u) \geq 0, \quad \forall v \in H,$
if and only if
(2.10)
$u = J_\varphi z,$
where $J_\varphi = (I + \rho\,\partial\varphi)^{-1}$ is the resolvent operator and $\rho > 0$ is a constant. This property of the resolvent operator $J_\varphi$ plays an important part in our results.
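The equivalence in Lemma 2.1 is simply the subdifferential characterization of the resolvent; for completeness, a short chain of equivalences (our sketch) reads
$u = J_\varphi z \iff z \in u + \rho\,\partial\varphi(u) \iff \varphi(v) - \varphi(u) \geq \langle (z - u)/\rho,\, v - u \rangle, \ \forall v \in H \iff \langle u - z, v - u \rangle + \rho\varphi(v) - \rho\varphi(u) \geq 0, \ \forall v \in H.$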

We now introduce the general resolvent equations. To be more precise, let $R_A \equiv I - J_A$, where I is the identity operator and $J_A = (I + \rho A)^{-1}$ is the resolvent operator. Let $g : H \to H$ be an invertible operator. For given nonlinear operators $T, g : H \to H$, consider the problem of finding $z \in H$ such that

(2.11)
$Tg^{-1}J_A z + \rho^{-1}R_A z = 0,$
where $\rho > 0$ is a constant. Equations of type (2.11) are called the general resolvent equations, see Noor (1998).

If $g = I$, then the general resolvent equations (2.11) collapse to finding $z \in H$ such that

(2.12)
$TJ_A z + \rho^{-1}R_A z = 0,$
which are known as the resolvent equations. The resolvent equations have been studied by Moudafi and Thera (1997) and Noor (1997b).

If $A(\cdot) \equiv \partial\varphi(\cdot)$, where $\partial\varphi$ is the subdifferential of a proper, convex and lower semicontinuous function $\varphi : H \to \mathbb{R} \cup \{+\infty\}$, then the general resolvent equations (2.11) are equivalent to finding $z \in H$ such that

(2.13)
$Tg^{-1}J_\varphi z + \rho^{-1}R_\varphi z = 0,$
which are also called the general resolvent equations, introduced and studied by Noor (1998, 2001b) in relation to the general mixed variational inequalities (2.4). Using these resolvent equations, Noor (1998, 2001b) has suggested and analyzed a number of iterative methods for solving general mixed variational inequalities. If $g \equiv I$, the identity operator, then problem (2.13) reduces to finding $z \in H$ such that
(2.14)
$TJ_\varphi z + \rho^{-1}R_\varphi z = 0,$
which are called the resolvent equations. For the applications, formulation and numerical methods of the resolvent equations, see (Noor, 1997b, 1998, 2000, 2001a, 2002b, 2003, 2004, 2006a,b, 2009a).

We remark that if $\varphi$ is the indicator function of a closed convex set K, then $J_\varphi \equiv P_K$, the projection of H onto K. Consequently, problem (2.13) is equivalent to finding $z \in H$ such that

(2.15)
$Tg^{-1}P_K z + \rho^{-1}Q_K z = 0,$
where $Q_K = I - P_K$. Equations of the type (2.15) are known as the general Wiener-Hopf equations, which are mainly due to Noor (1993). For $g = I$, we obtain the Wiener-Hopf (normal) equations introduced and studied by Shi (1991) in connection with the classical variational inequalities. We would like to mention that the Wiener-Hopf equations technique is being used to develop some implementable and efficient iterative algorithms for solving various classes of variational inequalities and related problems. For the recent state of the art, see (Noor, 2001a, 2004, 2009c) and the references therein.
Definition 2.2

The operator $T : H \to H$ is said to be

  1. g-monotone iff $\langle Tu - Tv, g(u) - g(v) \rangle \geq 0$, $\forall u, v \in H$;

  2. g-pseudomonotone with respect to the function $\varphi(\cdot)$ iff $\langle Tu, g(v) - g(u) \rangle + \varphi(g(v)) - \varphi(g(u)) \geq 0$ implies $\langle Tv, g(v) - g(u) \rangle + \varphi(g(v)) - \varphi(g(u)) \geq 0$, $\forall u, v \in H$.

    For $g \equiv I$, Definition 2.2 reduces to the usual definitions of monotonicity and pseudomonotonicity of the operator T. Note that monotonicity implies pseudomonotonicity, but the converse is not true, see (Alvarez and Attouch, 2001).


3 Resolvent-splitting methods

In this section, we use the technique of updating the solution to suggest and analyze a class of resolvent-splitting methods for solving the general variational inclusion (2.1). For this purpose, we need the following result, which can be proved by invoking Lemma 2.1.

Lemma 3.1 (Noor, 1988)

A function $u \in H$ is a solution of (2.1) if and only if $u \in H$ satisfies the relation

(3.1)
$g(u) = J_A[g(u) - \rho Tu],$
where $\rho > 0$ is a constant.

Lemma 3.1 implies that problems (2.1) and (3.1) are equivalent. This alternative formulation is very important from the theoretical and numerical points of view. This fixed-point formulation has been used to suggest and analyze the following method.
Algorithm 3.1

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$g(u_{n+1}) = J_A[g(u_n) - \rho Tu_n], \quad n = 0, 1, 2, \ldots$
Using the technique of Noor et al. (1993), one can prove that Algorithm 3.1 exhibits local convergence behaviour, which enables us to identify the optimal constraint accurately after finitely many iterations.
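As a minimal numerical illustration (ours, with assumed data), take $g = I$ and A the normal cone of the box $K = [0,1]^2$, so that $J_A$ is the Euclidean projection onto K, and let $T(u) = Qu + b$ with a symmetric positive definite Q; Algorithm 3.1 then reduces to a projected iteration:

```python
import numpy as np

J_A = lambda u: np.clip(u, 0.0, 1.0)   # assumed resolvent: projection onto K = [0,1]^2
Q = np.array([[4.0, 1.0], [1.0, 3.0]]) # symmetric positive definite => T is monotone
b = np.array([-1.0, -2.0])
T = lambda u: Q @ u + b

rho, u = 0.2, np.zeros(2)
for n in range(200):                   # Algorithm 3.1 with g = I
    u = J_A(u - rho * T(u))
print(u)                               # approximates the solution of 0 in Tu + A(u)
```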

It is well known (Noor, 2001a,b) that the convergence of the classical resolvent methods for solving variational inclusions requires that both operators T and g be strongly monotone and Lipschitz continuous. These strict conditions rule out many applications of resolvent-type methods. These factors motivated the modification of the resolvent-type methods by using the technique of updating the solution. Noor (2000, 2004) has used the technique of updating the solution to suggest and analyze a class of resolvent-splitting algorithms for general variational inclusions (2.1). Let $g^{-1}$ be the inverse of the operator g. Then Eq. (3.1) can be written in the form
$g(y) = J_A[g(u) - \rho Tu],$
$g(w) = J_A[g(y) - \rho Ty],$
$g(u) = J_A[g(w) - \rho Tw],$
or equivalently
$g(u) = J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}]g(u),$
$g(u) = (I + \rho Tg^{-1})^{-1}\{J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}] + \rho Tg^{-1}\}g(u).$
Using Lemma 3.1, one can easily show that $u \in H$ is a solution of (2.1) if and only if $u \in H$ satisfies the equation
$g(u) - J_A[g(w) - \rho Tw] = 0.$
These modified fixed-point formulations allow us to suggest and analyze the following resolvent-splitting-type methods for solving general variational inclusions (2.1).

Algorithm 3.2

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative schemes
$g(y_n) = J_A[g(u_n) - \rho Tu_n],$
$g(w_n) = J_A[g(y_n) - \rho Ty_n],$
$g(u_{n+1}) = J_A[g(w_n) - \rho Tw_n], \quad n = 0, 1, 2, \ldots$
Algorithm 3.2 is known as the predictor-corrector method and is due to Moudafi and Thera (1997) and Noor (1998).
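Under the same assumed setting as the sketch after Algorithm 3.1 ($g = I$, $J_A$ a box projection, affine monotone T), the three-step structure of Algorithm 3.2 can be sketched as:

```python
import numpy as np

J_A = lambda u: np.clip(u, 0.0, 1.0)   # assumed resolvent: projection onto [0,1]^2
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
T = lambda u: Q @ u + b

rho, u = 0.2, np.zeros(2)
for n in range(200):                   # Algorithm 3.2 with g = I
    y = J_A(u - rho * T(u))            # first resolvent step
    w = J_A(y - rho * T(y))            # second resolvent step
    u = J_A(w - rho * T(w))            # third resolvent step
print(u)
```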

Algorithm 3.3

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$g(u_{n+1}) = J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}]g(u_n), \quad n = 0, 1, 2, \ldots,$
which is known as the three-step forward-backward resolvent-splitting algorithm, in which the order of T and $J_A$ has not been changed, unlike in Glowinski and Tallec (1989). Algorithm 3.3 is similar to that of Glowinski and Tallec (1989), which they suggested by using the Lagrange multipliers method. For the convergence analysis of Algorithm 3.3, see (Glowinski and Tallec, 1989; He and Liao, 2002; Noor, 2001a, 2004).

Algorithm 3.4

For a given $u_0 \in H$, compute $u_{n+1}$ by the iterative scheme
$g(u_{n+1}) = (I + \rho Tg^{-1})^{-1}\{J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}]J_A[I - \rho Tg^{-1}] + \rho Tg^{-1}\}g(u_n), \quad n = 0, 1, 2, \ldots,$
which is also called the splitting resolvent method. Algorithm 3.4 can be viewed as a generalization of the splitting algorithm of Tseng (2000). Using the technique of Tseng (2000), one can discuss its applications in mathematical programming and optimization.

In this paper, we suggest another class of self-adaptive resolvent iterative methods with a line search strategy. These methods include some known splitting methods as special cases.

We now define the resolvent residue vector by the relation
$R(u) = g(u) - J_A[g(y) - \rho Ty] = g(u) - J_A[J_A[g(u) - \rho Tu] - \rho Tg^{-1}J_A[g(u) - \rho Tu]].$
From Lemma 3.1, it is clear that $u \in H$ is a solution of (2.1) if and only if $R(u) = 0$. Consider
$g(z) = (1 - \eta)g(u) + \eta J_A[g(y) - \rho Ty] = g(u) - \eta R(u) \in H.$
We now prove that the variational inclusion (2.1) is equivalent to the resolvent equation (2.11) by invoking Lemma 3.1; this is the prime motivation of our next result.

Theorem 3.1

The variational inclusion (2.1) has a solution $u \in H$ if and only if the resolvent equation (2.11) has a solution $z \in H$, where

(3.2)
$g(u) = J_A z,$
(3.3)
$z = g(u) - \rho Tu,$
where $J_A$ is the resolvent operator and $\rho > 0$ is a constant.
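One direction of Theorem 3.1 can be verified directly (our sketch): if u solves (2.1), set $z = g(u) - \rho Tu$; then (3.1) gives $g(u) = J_A z$, hence
$R_A z = z - J_A z = -\rho Tu \quad \text{and} \quad Tg^{-1}J_A z + \rho^{-1}R_A z = Tu - Tu = 0,$
so z solves the resolvent equation (2.11); the converse follows by reading the same identities backwards.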

From Theorem 3.1, we conclude that the variational inclusion (2.1) and the resolvent equation (2.11) are equivalent. This alternative formulation plays a crucial part in suggesting and analyzing various iterative methods for solving variational inclusions and related optimization problems. In this paper, by suitable and appropriate rearrangement, we suggest a number of new iterative methods for solving variational inclusions (2.1).

Using the fact that $R_A = I - J_A$, the resolvent equation (2.11) can be written as
$z - J_A z + \rho Tg^{-1}J_A z = 0.$
Thus, for a positive step size $\gamma$, we can write the above equation as
$g(u) = g(u) - \gamma\{z - J_A z + \rho Tg^{-1}J_A z\} = g(u) - \gamma D(u),$
where

(3.4)
$D(u) = z - J_A z + \rho Tg^{-1}J_A z = R(u) - \rho Tu + \rho Tg^{-1}J_A[g(u) - \rho Tu].$
Also, from (3.1), we have
(3.5)
$g(u) = J_A[g(u) - \alpha\{\eta(g(u) - g(w)) + \rho Tz\}] = J_A[g(u) - \alpha\{\eta R(u) + \rho Tz\}] = J_A[g(u) - \alpha d(u)],$
where
(3.6)
$d(u) = \eta R(u) + \rho Tz = \eta R(u) + \rho Tg^{-1}(g(u) - \eta R(u)).$
This fixed-point formulation enables us to suggest and analyze the following method for general variational inclusions (2.1).
Algorithm 3.5

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative schemes

  • Predictor step

    $g(y_n) = J_A[g(u_n) - \rho_n Tu_n],$
    $g(w_n) = J_A[g(y_n) - \rho_n Ty_n],$
    $g(z_n) = g(u_n) - \eta_n R(u_n),$
    where $\eta_n = a^{m_n}$, and $m_n$ is the smallest nonnegative integer m such that
    $\eta_n\rho_n\langle Tu_n - Tg^{-1}(g(u_n) - \eta_n R(u_n)), R(u_n)\rangle \leq \sigma\|R(u_n)\|^2, \quad \sigma \in [0, 1].$

  • Corrector step

    $g(u_{n+1}) = J_A[g(u_n) - \alpha_n d(u_n)], \quad n = 0, 1, 2, \ldots,$
    $d(u_n) = \eta_n R(u_n) + \rho_n Tz_n = \eta_n R(u_n) + \rho_n Tg^{-1}(g(u_n) - \eta_n R(u_n)),$
    $\alpha_n = \frac{\langle \eta_n R(u_n), D(u_n)\rangle}{\|d(u_n)\|^2},$
    $D(u_n) = R(u_n) - \rho_n Tu_n + \rho_n Tz_n = R(u_n) - \rho_n Tu_n + \rho_n Tg^{-1}(g(u_n) - \eta_n R(u_n)).$
    Here $\alpha_n$ is the corrector step size, which involves the resolvent equation (2.11).

    If $g \equiv I$, the identity operator, then Algorithm 3.5 collapses to the following method for solving the classical variational inclusion (2.2).

Algorithm 3.6

For a given $u_0 \in H$, compute $u_{n+1}$ by the iterative schemes

  • Predictor step

    $y_n = J_A[u_n - \rho_n Tu_n],$
    $w_n = J_A[y_n - \rho_n Ty_n],$
    $z_n = u_n - \eta_n R(u_n),$
    where $\eta_n$ satisfies
    $\eta_n\rho_n\langle Tu_n - T(u_n - \eta_n R(u_n)), R(u_n)\rangle \leq \sigma\|R(u_n)\|^2, \quad \sigma \in [0, 1].$

  • Corrector step

    $u_{n+1} = J_A[u_n - \alpha_n d_1(u_n)], \quad n = 0, 1, 2, \ldots,$
    where
    $d_1(u_n) = \eta_n R(u_n) + \rho_n Tz_n = \eta_n R(u_n) + \rho_n T(u_n - \eta_n R(u_n)),$
    $\alpha_n = \frac{\langle \eta_n R(u_n), D_1(u_n)\rangle}{\|d_1(u_n)\|^2},$
    $D_1(u_n) = R(u_n) - \rho_n Tu_n + \rho_n T(u_n - \eta_n R(u_n)).$
    Here $\alpha_n$ is the corrector step size, which involves the resolvent equation.
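    The following sketch (ours, reusing the same assumed data as the earlier sketches: $J_A$ a box projection and $T(u) = Qu + b$) implements Algorithm 3.6, including the Armijo-type search for $\eta_n = a^{m_n}$ and the corrector step size $\alpha_n$ as written above:

```python
import numpy as np

J_A = lambda u: np.clip(u, 0.0, 1.0)           # assumed resolvent: projection onto [0,1]^2
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
T = lambda u: Q @ u + b

def residue(u, rho):
    """R(u) = u - J_A[y - rho*T(y)] with y = J_A[u - rho*T(u)]  (g = I)."""
    y = J_A(u - rho * T(u))
    return u - J_A(y - rho * T(y))

rho, sigma, a = 0.1, 0.3, 0.5
u = np.zeros(2)
for n in range(100):
    R = residue(u, rho)
    if np.linalg.norm(R) < 1e-10:
        break
    eta = 1.0                                  # eta_n = a^{m_n}: shrink until the test holds
    while eta * rho * np.dot(T(u) - T(u - eta * R), R) > sigma * np.dot(R, R):
        eta *= a
    z = u - eta * R                            # predictor
    d = eta * R + rho * T(z)                   # direction d_1(u_n)
    D = R - rho * T(u) + rho * T(z)            # D_1(u_n), from the resolvent equation
    alpha = eta * np.dot(R, D) / np.dot(d, d)  # corrector step size alpha_n
    u = J_A(u - alpha * d)                     # corrector step
print(u)
```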

    For $\eta_n = 1$, Algorithm 3.5 becomes:

Algorithm 3.7

For a given $u_0 \in H$, compute $u_{n+1}$ by the iterative schemes

  • Predictor step

    $g(y_n) = J_A[g(u_n) - \rho_n Tu_n],$
    $g(w_n) = J_A[g(y_n) - \rho_n Ty_n],$
    where $\rho_n$ satisfies
    $\rho_n\langle Tu_n - Tw_n, R(u_n)\rangle \leq \sigma\|R(u_n)\|^2, \quad \sigma \in (0, 1).$

  • Corrector step

    $g(u_{n+1}) = J_A[g(u_n) - \alpha_n d_2(u_n)], \quad n = 0, 1, 2, \ldots,$
    where
    $d_2(u_n) = R(u_n) + \rho_n Tw_n,$
    $\alpha_n = \frac{\langle R(u_n), D_2(u_n)\rangle}{\|d_2(u_n)\|^2},$
    $D_2(u_n) = R(u_n) - \rho_n Tu_n + \rho_n Tw_n,$
    and $\alpha_n$ is the corrector step size.

    If $A(\cdot) \equiv \partial\varphi(\cdot)$, where $\partial\varphi$ is the subdifferential of a proper, convex and lower semicontinuous function $\varphi : H \to \mathbb{R} \cup \{+\infty\}$, then $J_A \equiv J_\varphi = (I + \rho\,\partial\varphi)^{-1}$, the resolvent operator, and consequently Algorithm 3.5 collapses to:

Algorithm 3.8

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative schemes

  • Predictor step

    (3.7)
    $g(y_n) = J_\varphi[g(u_n) - \rho_n Tu_n],$
    (3.8)
    $g(w_n) = J_\varphi[g(y_n) - \rho_n Ty_n],$
    (3.9)
    $g(z_n) = g(u_n) - \eta_n R(u_n),$
    where $\eta_n = a^{m_n}$, and $m_n$ is the smallest nonnegative integer m such that
    (3.10)
    $\eta_n\rho_n\langle Tu_n - Tg^{-1}(g(u_n) - \eta_n R(u_n)), R(u_n)\rangle \leq \sigma\|R(u_n)\|^2, \quad \sigma \in [0, 1].$

  • Corrector step

    (3.11)
    $g(u_{n+1}) = J_\varphi[g(u_n) - \alpha_n d_3(u_n)], \quad n = 0, 1, 2, \ldots,$
    (3.12)
    $d_3(u_n) = \eta_n R(u_n) + \rho_n Tz_n = \eta_n R(u_n) + \rho_n Tg^{-1}(g(u_n) - \eta_n R(u_n)),$
    (3.13)
    $\alpha_n = \frac{\langle \eta_n R(u_n), D_3(u_n)\rangle}{\|d_3(u_n)\|^2},$
    (3.14)
    $D_3(u_n) = R(u_n) - \rho_n Tu_n + \rho_n Tg^{-1}(g(u_n) - \eta_n R(u_n)).$
    Here $\alpha_n$ is the corrector step size, which involves the resolvent equation.

    For $\alpha_n = 1$, Algorithm 3.5 is exactly the resolvent-splitting Algorithm 3.1, which is due to Moudafi and Thera (1997). For $g \equiv I$ and $\eta_n = 1$, one can obtain several new splitting-type algorithms for solving classical (quasi) variational inclusions (2.2) and (2.3). This clearly shows that Algorithm 3.5 is a unifying one and includes several new and previously known methods as special cases.

    For the convergence analysis of Algorithm 3.8, we need the following result, which can be proved using the technique of Noor (2001a). For the sake of completeness, we sketch the main ideas.

Theorem 3.2

If $\bar{u} \in H$ is a solution of (2.1) and the operator $T : H \to H$ is g-monotone, then

(3.15)
$\langle g(u) - g(\bar{u}), d(u) \rangle \geq (\eta - \sigma)\|R(u)\|^2, \quad \forall u \in H.$

Theorem 3.3

Let $\bar{u} \in H$ be a solution of (2.1) and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.8. Then

(3.16)
$\|g(u_{n+1}) - g(\bar{u})\|^2 \leq \|g(u_n) - g(\bar{u})\|^2 - \frac{(\eta_n - \sigma)^2\|R(u_n)\|^4}{\|d_3(u_n)\|^2}.$

Proof

From (3.15), we have
$\|g(u_{n+1}) - g(\bar{u})\|^2 \leq \|g(u_n) - g(\bar{u}) - \alpha_n d_3(u_n)\|^2$
$\leq \|g(u_n) - g(\bar{u})\|^2 - 2\alpha_n\langle g(u_n) - g(\bar{u}), d_3(u_n)\rangle + \alpha_n^2\|d_3(u_n)\|^2$
$\leq \|g(u_n) - g(\bar{u})\|^2 - \alpha_n\langle \eta_n R(u_n), D_3(u_n)\rangle$
$\leq \|g(u_n) - g(\bar{u})\|^2 - \alpha_n(\eta_n - \sigma)\|R(u_n)\|^2$
$\leq \|g(u_n) - g(\bar{u})\|^2 - \frac{(\eta_n - \sigma)^2\|R(u_n)\|^4}{\|d_3(u_n)\|^2},$
the required result.  □

Theorem 3.4

Let $\bar{u} \in H$ be a solution of (2.1) and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.8. If H is a finite-dimensional space and g is injective, then $\lim_{n\to\infty} u_n = \bar{u}$.

Proof

Let $\bar{u}$ be a solution of (2.1). Then, from (3.16), it follows that the sequence $\{\|g(u_n) - g(\bar{u})\|\}$ is nonincreasing and consequently the sequence $\{g(u_n)\}$ is bounded. Thus, under the assumptions on g, it follows that the sequence $\{u_n\}$ is bounded. Furthermore, from (3.16), we have
$\sum_{n=0}^{\infty}\frac{(\eta_n - \sigma)^2\|R(u_n)\|^4}{\|d_3(u_n)\|^2} \leq \|g(u_0) - g(\bar{u})\|^2,$
which implies that

(3.17)
$\lim_{n\to\infty} R(u_n) = 0,$
or
(3.18)
$\lim_{n\to\infty} \eta_n = 0.$
Assume that (3.17) holds. Let $\bar{u}$ be a cluster point of $\{u_n\}$ and let the subsequence $\{u_{n_j}\}$ of $\{u_n\}$ converge to $\bar{u}$. Since T and g are continuous, it follows that R is continuous and $R(\bar{u}) = \lim_{j\to\infty} R(u_{n_j}) = 0$, from which it follows, by invoking Lemma 3.1, that $\bar{u}$ is a solution of (2.1), and
(3.19)
$\|g(u_{n+1}) - g(\bar{u})\|^2 \leq \|g(u_n) - g(\bar{u})\|^2.$
Thus the sequence $\{u_n\}$ has exactly one cluster point, and consequently $\lim_{n\to\infty} g(u_n) = g(\bar{u})$. Since g is injective, it follows that $\lim_{n\to\infty} u_n = \bar{u} \in H$ satisfies the general variational inclusion (2.1).

Assume that (3.18) holds, that is, $\lim_{n\to\infty}\eta_n = 0$. If (3.10) does not hold, then by the choice of $\eta_n$, we obtain

(3.20)
$\sigma\|R(u_n)\|^2 < \rho_n\eta_n\langle Tu_n - Tg^{-1}(g(u_n) - \eta_n R(u_n)), R(u_n)\rangle.$
Let $\bar{u}$ be a cluster point of $\{u_n\}$ and let $\{u_{n_j}\}$ be the corresponding subsequence of $\{u_n\}$ converging to $\bar{u}$. Taking the limit in (3.20), we have $\sigma\|R(\bar{u})\|^2 \leq 0$, which implies that $R(\bar{u}) = 0$; that is, $\bar{u} \in H$ is a solution of (2.1) by invoking Lemma 3.1, and (3.19) holds. Repeating the above arguments, we conclude that $\lim_{n\to\infty} u_n = \bar{u}$.  □


4 Inertial proximal methods

In this section, we consider inertial proximal methods for solving the general variational inclusion (2.1). Inertial proximal methods were introduced and studied by Alvarez (2000) and Alvarez and Attouch (2001) in the context of an implicit discretization of a second-order differential equation in time. These inertial-type methods include the classical proximal methods as a special case. Noor (2002a) has used the auxiliary principle technique to study the convergence analysis of proximal-type methods.

We again use the fixed-point formulation (3.1) to suggest and analyze the following iterative method for solving the general variational inclusion (2.1).

Algorithm 4.1

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$g(u_{n+1}) = J_A[g(u_n) - \rho Tu_{n+1}], \quad n = 0, 1, 2, \ldots,$
which is known as the implicit resolvent method. For the convergence analysis of Algorithm 4.1, see (Noor, 2001a).

We can rewrite (3.1) in the following equivalent form.

(4.1)
$g(u) = J_A[g(u) - \rho Tu + \alpha(g(u) - g(u))],$
where $\rho > 0$ and $\alpha > 0$ are constants.

This formulation is used to suggest the following iterative method for solving the general variational inclusions (2.1).

Algorithm 4.2

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$g(u_{n+1}) = J_A[g(u_n) - \rho Tu_{n+1} + \alpha_n(g(u_n) - g(u_{n-1}))], \quad n = 0, 1, 2, \ldots$
If $A(\cdot) \equiv \partial\varphi(\cdot)$, where $\partial\varphi$ is the subdifferential of a proper, convex and lower semicontinuous function $\varphi : H \to \mathbb{R} \cup \{+\infty\}$, then $J_A \equiv J_\varphi = (I + \rho\,\partial\varphi)^{-1}$, the resolvent operator, and consequently Algorithm 4.2 collapses to:

Algorithm 4.3

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$g(u_{n+1}) = J_\varphi[g(u_n) - \rho Tu_{n+1} + \alpha_n(g(u_n) - g(u_{n-1}))], \quad n = 0, 1, 2, \ldots$
Using Lemma 2.1, Algorithm 4.3 can be written as:

Algorithm 4.4

For a given u 0 H , compute the approximate solution u n + 1 by the iterative scheme

(4.2)
$\langle \rho Tu_{n+1} + g(u_{n+1}) - g(u_n) - \alpha_n(g(u_n) - g(u_{n-1})), g(v) - g(u_{n+1})\rangle + \rho\varphi(g(v)) - \rho\varphi(g(u_{n+1})) \geq 0, \quad \forall v \in H.$
If $g = I$, the identity operator, then Algorithm 4.4 reduces to:

Algorithm 4.5

For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$\langle \rho Tu_{n+1} + u_{n+1} - u_n - \alpha_n(u_n - u_{n-1}), v - u_{n+1}\rangle + \rho\varphi(v) - \rho\varphi(u_{n+1}) \geq 0, \quad \forall v \in H.$
In a similar way, one can suggest and analyze several iterative methods for solving general variational inclusions and related optimization problems.
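Since the update in Algorithm 4.5 is implicit in $u_{n+1}$, a practical sketch must solve the implicit step. Below (our illustration, with the same assumed box/affine data as in the earlier sketches and $J_\varphi$ the projection onto the box) we do so with an inner fixed-point loop, which contracts when $\rho\|Q\| < 1$:

```python
import numpy as np

J_phi = lambda u: np.clip(u, 0.0, 1.0)  # assumed resolvent of phi = indicator of [0,1]^2
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
T = lambda u: Q @ u + b

rho, alpha = 0.2, 0.1                   # rho*||Q|| < 1, so the inner map is a contraction
u_prev = u = np.zeros(2)
for n in range(200):
    base = u + alpha * (u - u_prev)     # inertial extrapolation alpha_n*(u_n - u_{n-1})
    v = u
    for _ in range(50):                 # inner loop solves v = J_phi[base - rho*T(v)]
        v = J_phi(base - rho * T(v))    # approximate solution of the implicit relation
    u_prev, u = u, v
print(u)
```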

For the convergence analysis of Algorithm 4.4, we recall the following well-known result.

Lemma 4.1

$2\langle u, v \rangle = \|u + v\|^2 - \|u\|^2 - \|v\|^2, \quad \forall u, v \in H.$

We now study the convergence of Algorithm 4.4. The analysis is in the spirit of Noor (2004) and Alvarez (2000).
Theorem 4.1

Let $\bar{u} \in H$ be a solution of (2.1) and let $u_{n+1}$ be the approximate solution obtained from Algorithm 4.4. If the operator $T : H \to H$ is g-pseudomonotone, then

(4.3)
$\|g(u_{n+1}) - g(\bar{u})\|^2 \leq \|g(u_n) - g(\bar{u})\|^2 + \alpha_n\{\|g(u_n) - g(\bar{u})\|^2 - \|g(\bar{u}) - g(u_{n-1})\|^2 + 2\|g(u_n) - g(u_{n-1})\|^2\} - \|g(u_{n+1}) - g(u_n) - \alpha_n(g(u_n) - g(u_{n-1}))\|^2.$

Proof

Let $\bar{u} \in H$ be a solution of (2.1). Then
$\langle T\bar{u}, g(v) - g(\bar{u})\rangle + \varphi(g(v)) - \varphi(g(\bar{u})) \geq 0, \quad \forall v \in H,$
implies that

(4.4)
$\langle Tv, g(v) - g(\bar{u})\rangle + \varphi(g(v)) - \varphi(g(\bar{u})) \geq 0, \quad \forall v \in H,$
since T is g-pseudomonotone.

Taking $v = u_{n+1}$ in (4.4), we have

(4.5)
$\langle Tu_{n+1}, g(u_{n+1}) - g(\bar{u})\rangle + \varphi(g(u_{n+1})) - \varphi(g(\bar{u})) \geq 0.$
Setting $v = \bar{u}$ in (4.2), we obtain
(4.6)
$\langle \rho Tu_{n+1} + g(u_{n+1}) - g(u_n) - \alpha_n\{g(u_n) - g(u_{n-1})\}, g(\bar{u}) - g(u_{n+1})\rangle + \rho\varphi(g(\bar{u})) - \rho\varphi(g(u_{n+1})) \geq 0.$
Combining (4.5) and (4.6), we have
(4.7)
$\langle g(u_{n+1}) - g(u_n) - \alpha_n\{g(u_n) - g(u_{n-1})\}, g(\bar{u}) - g(u_{n+1})\rangle \geq 0,$
which can be written as
(4.8)
$\langle g(u_{n+1}) - g(u_n), g(\bar{u}) - g(u_{n+1})\rangle \geq \alpha_n\langle g(u_n) - g(u_{n-1}), g(\bar{u}) - g(u_n) + g(u_n) - g(u_{n+1})\rangle.$
Using Lemma 4.1 and rearranging the terms of (4.8), one can easily obtain (4.3), the required result.  □

Theorem 4.2

Let H be a finite-dimensional space and let g be an injective operator. Let $u_{n+1}$ be the approximate solution obtained from Algorithm 4.4 and let $\bar{u} \in H$ be a solution of (2.1). If there exists $\alpha \in [0, 1)$ such that $0 \leq \alpha_n \leq \alpha$ for all $n \in \mathbb{N}$, and

(4.9)
$\sum_{n=1}^{\infty}\alpha_n\|g(u_n) - g(u_{n-1})\|^2 < \infty,$
then $\lim_{n\to\infty} u_n = \bar{u}$.

Proof

Let $\bar{u} \in H$ be a solution of (2.1). First we consider the case $\alpha_n = 0$. In this case, from (4.3), we see that the sequence $\{\|g(u_n) - g(\bar{u})\|\}$ is nonincreasing and consequently the sequence $\{u_n\}$ is bounded. Also, from (4.3), we have
$\sum_{n=0}^{\infty}\|g(u_{n+1}) - g(u_n)\|^2 \leq \|g(u_0) - g(\bar{u})\|^2,$
which implies that

(4.10)
$\lim_{n\to\infty}\|g(u_{n+1}) - g(u_n)\| = 0.$
Let $\hat{u}$ be a limit point of $\{u_n\}$; a subsequence $\{u_{n_j}\}$ of $\{u_n\}$ converges to $\hat{u} \in H$. Replacing $u_{n+1}$ by $u_{n_j}$ in (4.2), taking the limit as $n_j \to \infty$ and using (4.10), we have
$\langle T\hat{u}, g(v) - g(\hat{u})\rangle + \varphi(g(v)) - \varphi(g(\hat{u})) \geq 0, \quad \forall v \in H,$
which implies that $\hat{u}$ solves the general mixed variational inequality (2.4) and
$\|g(u_{n+1}) - g(\hat{u})\|^2 \leq \|g(u_n) - g(\hat{u})\|^2.$
Thus, it follows from the above inequality that $\{u_n\}$ has exactly one limit point $\hat{u}$ and $\lim_{n\to\infty} g(u_n) = g(\hat{u})$. Since g is injective, $\lim_{n\to\infty} u_n = \hat{u}$. Now we consider the case $\alpha_n > 0$. From (4.3), using the technique of (Noor, 2009b,c), one has
$\sum_{n=1}^{\infty}\|g(u_{n+1}) - g(u_n) - \alpha_n(g(u_n) - g(u_{n-1}))\|^2 \leq \|g(u_0) - g(\bar{u})\|^2 + \sum_{n=1}^{\infty}\alpha_n\big(\|g(u_n) - g(\bar{u})\|^2 - \|g(u_{n-1}) - g(\bar{u})\|^2 + 2\|g(u_n) - g(u_{n-1})\|^2\big),$
which implies that
$\lim_{n\to\infty}\|g(u_{n+1}) - g(u_n) - \alpha_n(g(u_n) - g(u_{n-1}))\| = 0.$
Repeating the above argument as in the case $\alpha_n = 0$, one can easily show that $\lim_{n\to\infty} u_n = \hat{u}$.  □

Acknowledgement

The authors would like to express their deepest gratitude to the Rector, CIIT, Islamabad, Pakistan, and the President, King Saud University, Riyadh, Saudi Arabia, for providing excellent research facilities.

References

  1. Alvarez, F. On the minimization property of a second order dissipative system in Hilbert space. SIAM J. Control Optim. 2000;38:1102-1119.
  2. Alvarez, F., Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001;9:3-11.
  3. Brezis, H. Operateurs Maximaux Monotones et Semigroupes de Contractions dans les Espaces de Hilbert. Amsterdam: North-Holland; 1973.
  4. El Farouq, N. Pseudomonotone variational inequalities: convergence of proximal methods. J. Optim. Theory Appl. 2001;109:311-326.
  5. Giannessi, F., Maugeri, A. Variational Inequalities and Network Equilibrium Problems. New York: Plenum Press; 1995.
  6. Giannessi, F., Maugeri, A., Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models. Dordrecht: Kluwer Academic Publishers; 2001.
  7. Glowinski, R., Le Tallec, P. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. Philadelphia, PA: SIAM; 1989.
  8. Haubruge, S., Nguyen, V.H., Strodiot, J.J. Convergence analysis and applications of the Glowinski-Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1998;97:645-673.
  9. He, B.S., Liao, L.Z. Improvement of some projection methods for monotone nonlinear variational inequalities. J. Optim. Theory Appl. 2002;112:111-128.
  10. Kinderlehrer, D., Stampacchia, G. An Introduction to Variational Inequalities and Their Applications. Philadelphia, PA: SIAM; 2000.
  11. Moudafi, A., Noor, M.A. Sensitivity analysis of variational inclusions by the Wiener-Hopf equations technique. J. Appl. Math. Stochastic Anal. 1999;12.
  12. Moudafi, A., Thera, M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997;97:425-448.
  13. Noor, M.A. General variational inequalities. Appl. Math. Lett. 1988;1:119-121.
  14. Noor, M.A. Wiener-Hopf equations and variational inequalities. J. Optim. Theory Appl. 1993;79:197-206.
  15. Noor, M.A. Some recent advances in variational inequalities. Part I: basic concepts. New Zealand J. Math. 1997;26:53-80.
  16. Noor, M.A. Some recent advances in variational inequalities. Part II: other concepts. New Zealand J. Math. 1997;26:229-255.
  17. Noor, M.A. Generalized set-valued variational inclusions and resolvent equations. J. Math. Anal. Appl. 1998;228:206-220.
  18. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000;251:217-229.
  19. Noor, M.A. Resolvent equations technique for general variational inclusions. Nonlinear Anal. Forum 2001;6:171-184.
  20. Noor, M.A. Three-step iterative algorithms for multivalued quasi variational inclusions. J. Math. Anal. Appl. 2001;255:589-604.
  21. Noor, M.A. Proximal methods for mixed variational inequalities. J. Optim. Theory Appl. 2002;115:447-452.
  22. Noor, M.A. Projection-splitting methods for general variational inequalities. J. Comput. Anal. Appl. 2002;4:47-61.
  23. Noor, M.A. New extragradient-type methods for general variational inequalities. J. Math. Anal. Appl. 2003;277:379-395.
  24. Noor, M.A. Some developments in general variational inequalities. Appl. Math. Comput. 2004;152:199-277.
  25. Noor, M.A. Merit functions for general variational inequalities. J. Math. Anal. Appl. 2006;316:736-752.
  26. Noor, M.A. Projection-proximal methods for general variational inequalities. J. Math. Anal. Appl. 2006;318:53-62.
  27. Noor, M.A. Iterative methods for general nonconvex variational inequalities. Albanian J. Math. 2009;3:117-127.
  28. Noor, M.A. Some classes of general nonconvex variational inequalities. Albanian J. Math. 2009;3:175-188.
  29. Noor, M.A. Implicit iterative methods for nonconvex variational inequalities. J. Optim. Theory Appl. 2009;143:619-624.
  30. Noor, M.A. An extragradient algorithm for solving general nonconvex variational inequalities. Appl. Math. Lett. 2010;23:917-921.
  31. Noor, M.A. Nonconvex quasi variational inequalities. J. Adv. Math. Stud. 2010;3:59-72.
  32. Noor, M.A. Auxiliary principle technique for solving general mixed variational inequalities. J. Adv. Math. Stud. 2010;3(2).
  33. Noor, M.A. On an implicit method for nonconvex variational inequalities. J. Optim. Theory Appl. 2010;147.
  34. Noor, M.A., Noor, K.I. Self-adaptive projection methods for general variational inequalities. Appl. Math. Comput. 2004;154:659-670.
  35. Noor, M.A., Noor, K.I. New system of general nonconvex variational inequalities. Appl. Math. E-Notes 2010;10:76-85.
  36. Noor, M.A., Rassias, Th.M. A class of projection methods for general variational inequalities. J. Math. Anal. Appl. 2002;268:334-343.
  37. Noor, M.A., Noor, K.I., Rassias, Th.M. Some aspects of variational inequalities. J. Comput. Appl. Math. 1993;47:285-312.
  38. Patriksson, M. Nonlinear Programming and Variational Inequalities: A Unified Approach. Dordrecht: Kluwer Academic Publishers; 1998.
  39. Shi, P. Equivalence of variational inequalities with Wiener-Hopf equations. Proc. Am. Math. Soc. 1991;111:339-346.
  40. Stampacchia, G. Formes bilineaires coercitives sur les ensembles convexes. C.R. Acad. Sci. Paris 1964;258:4413-4416.
  41. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000;38:431-446.
  42. Uko, L.U. Strongly nonlinear generalized equations. J. Math. Anal. Appl. 1998;220:65-76.
  43. Xiu, N., Zhang, J., Noor, M.A. Tangent projection equations and general variational inequalities. J. Math. Anal. Appl. 2001;258:755-762.