Some resolvent methods for general variational inclusions
*Corresponding author at: Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan. Tel.: +92 23454027532. E-mail: noormaslam@hotmail.com (Muhammad Aslam Noor).
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.
Available online 20 June 2010
Abstract
In this paper, we consider and analyze some classes of resolvent-splitting methods for solving the general variational inclusions using the technique of updating the solution. These resolvent-splitting methods are self-adaptive-type methods, where the corrector step size involves the resolvent equation. We prove that the convergence of these new methods requires only pseudomonotonicity, which is a weaker condition than monotonicity. These new methods differ from the previously known splitting and inertial proximal methods for solving the general variational inclusions and related complementarity problems. The proposed methods include several new and known methods as special cases. Our results may be viewed as a refinement and improvement of previously known methods.
Keywords
Variational inequalities
Splitting methods
Inertial proximal method
Auxiliary principle
Fixed-point
Convergence
Introduction
Variational inequalities theory has played a significant and fundamental role in the development of new and innovative techniques for solving complex and complicated problems arising in pure and applied sciences, see (Alvarez, 2000; Alvarez and Attouch, 2001; Brezis, 1973; El Farouq, 2001; Giannessi and Maugeri, 1995; Giannessi et al., 2001; Glowinski and Tallec, 1989; Haugruge et al., 1998; He and Liao, 2002; Kinderlehrer and Stampacchia, 2000; Moudafi and Noor, 1999; Moudafi and Thera, 1997; Noor, 1988, 1993, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a–d; Noor and Noor, 2010, 2004; Noor et al., 1993; Noor and Rassias, 2002; Patriksson, 1998; Shi, 1991; Stampacchia, 1964; Tseng, 2000; Uko, 1998; Xiu et al., 2001). Variational inequalities have been extended and generalized in various directions using novel and innovative techniques. A useful and important generalization is the general variational inclusion involving the sum of two nonlinear operators T and A. Moudafi and Noor (1999) studied the sensitivity analysis of variational inclusions by using the technique of the resolvent equations. Recently, much attention has been given to developing iterative algorithms for solving variational inclusions. It is known that such algorithms require an evaluation of the resolvent operator of the type (I + ρ(T + A))⁻¹. The main difficulty with such problems is that this resolvent operator may be hard to invert. This difficulty has been overcome by using the resolvent operators (I + ρT)⁻¹ and (I + ρA)⁻¹ separately rather than (I + ρ(T + A))⁻¹. Such a technique is called the splitting method. These methods for solving variational inclusions have been studied extensively, see, for example (Glowinski and Tallec, 1989; Moudafi and Thera, 1997; Noor, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a) and the references therein.
In the context of mixed variational inequalities (variational inclusions), Noor (2000, 2001a, 2002b, 2003, 2004) has used the resolvent operator and resolvent equations techniques to suggest and analyze a number of resolvent-type iterative methods. A useful feature of these splitting methods is that the resolvent step involves the subdifferential of the proper, convex and lower semicontinuous function only, while the other part facilitates the problem decomposition.
Noor (1998) introduced and considered the general variational inclusion, which is an important and significant generalization of variational inequalities. It turned out that a wide class of nonsymmetric, odd-order, free, moving, unilateral and equilibrium problems arising in elasticity, transportation, circuit analysis, oceanography, nonlinear optimization, finance, economics and operations research can be studied via general variational inclusions, see (Alvarez, 2000; Alvarez and Attouch, 2001; Brezis, 1973; El Farouq, 2001; Giannessi and Maugeri, 1995; Giannessi et al., 2001; Glowinski and Tallec, 1989; Haugruge et al., 1998; He and Liao, 2002; Kinderlehrer and Stampacchia, 2000; Moudafi and Noor, 1999; Moudafi and Thera, 1997; Noor, 1988, 1993, 1997a,b, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a–d; Noor and Noor, 2010, 2004; Noor et al., 1993; Noor and Rassias, 2002; Patriksson, 1998; Shi, 1991; Stampacchia, 1964; Tseng, 2000; Uko, 1998; Xiu et al., 2001). Variational inclusion theory is experiencing explosive growth in both theory and applications; as a consequence, several numerical techniques including resolvent operator, resolvent equations, auxiliary principle, decomposition and descent are being developed for solving various classes of variational inclusions and related optimization problems. Resolvent methods and their variant forms, including the resolvent equations, represent important tools for finding the approximate solution of variational inclusions. The main idea in this technique is to establish the equivalence between the variational inclusions and the fixed-point problem by using the concept of the resolvent operator. This alternative formulation has played a significant part in developing various resolvent methods for solving variational inclusions. It is well known that the convergence of the resolvent methods requires that the operator must be strongly monotone and Lipschitz continuous.
Unfortunately, these strict conditions rule out many applications of this method. This fact has motivated the modification of the resolvent method and the development of other methods. The extragradient method overcomes this difficulty by performing an additional forward step and a projection at each iteration according to the double resolvent. This method can be viewed as a predictor-corrector method. Its convergence requires that a solution exists and that the monotone operator is Lipschitz continuous. When the operator is not Lipschitz continuous, or when the Lipschitz constant is not known, the extraresolvent method and its variant forms require an Armijo-like line search procedure to compute the step size, with a new projection needed for each trial, which leads to expensive computations. To overcome these difficulties, several modified resolvent and extraresolvent-type methods have been suggested and developed for solving variational inequalities, see (Noor, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004, 2006a,b, 2009a,b,c, 2010a) and the references therein. Glowinski and Tallec (1989) have suggested and analyzed some three-step splitting methods for solving variational inclusion problems by using the Lagrange multipliers technique. They have shown that three-step splitting methods are numerically more efficient than one-step and two-step splitting methods. They have studied the convergence of these splitting methods under the assumption that the underlying operator is monotone and Lipschitz continuous. Noor (2004) has suggested some three-step projection-splitting methods for various classes of variational inequalities and variational inclusions using the technique of updating the solution, in which the order of T and the resolvent operator associated with the maximal monotone operator has not been changed. These three-step splitting methods are compatible with the three-step splitting methods of Glowinski and Tallec (1989).
For the applications and convergence analysis of three-step splitting methods, see (Glowinski and Tallec, 1989; He and Liao, 2002; Moudafi and Thera, 1997; Noor, 1998, 2000, 2001a,b, 2002a,b, 2003, 2004) and the references therein.
In this paper, we suggest and analyze a class of self-adaptive resolvent methods by modifying the fixed-point equations involving a generalized residue vector associated with the variational inclusions. These methods are simple and robust. The searching direction in these methods is a combination of the generalized resolvent residue and the modified extraresolvent direction. These new methods are different from the existing one-step, two-step and three-step projection-splitting methods. We prove that the convergence of the proposed methods requires only pseudomonotonicity, which is a weaker condition than monotonicity.
Noor (2004) and El Farouq (2001) have used the auxiliary principle technique to suggest and analyze a class of proximal (implicit) methods. Alvarez (2000) and Alvarez and Attouch (2001) have considered an inertial proximal method for maximal monotone operators via the discretization of a second-order differential equation in time, which includes the classical proximal method. We again use the equivalent fixed-point formulation of the variational inclusions to suggest an inertial proximal method for general variational inclusions. We show that the convergence of the inertial proximal method requires pseudomonotonicity, which is a weaker condition than monotonicity. Thus it is clear that our results improve the convergence criteria of the inertial proximal methods of Alvarez and Attouch (2001). Our proof of convergence is very simple compared with those of other methods. Since general variational inclusions include classical variational inclusions and general (quasi) complementarity problems as special cases, the results obtained in this paper continue to hold for these problems. The comparison of these methods with the existing ones is an interesting problem for future research work.
Preliminaries
Let H be a real Hilbert space, whose inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. Let K be a closed convex set in H and let T, g : H → H be nonlinear operators. Let φ : H → R ∪ {+∞} be a proper, convex and lower semicontinuous function.
For given nonlinear operators T, g : H → H and a maximal monotone operator A, consider the problem of finding u ∈ H such that

0 ∈ T(u) + A(g(u)), (2.1)

which is known as the general variational inclusion.
Note that if g ≡ I, the identity operator, then problem (2.1) is equivalent to finding u ∈ H such that

0 ∈ T(u) + A(u), (2.2)

which is known as the problem of finding a zero of the sum of two (maximal) monotone operators.
If A(·) = ∂φ(·), where ∂φ is the subdifferential of a proper, convex and lower semicontinuous function φ : H → R ∪ {+∞}, then problem (2.1) reduces to finding u ∈ H such that

⟨T(u), g(v) − g(u)⟩ + φ(g(v)) − φ(g(u)) ≥ 0, for all v ∈ H, (2.4)

which is known as the general mixed variational inequality.
We remark that if g ≡ I, the identity operator, then the problem (2.4) is equivalent to finding u ∈ H such that

⟨T(u), v − u⟩ + φ(v) − φ(u) ≥ 0, for all v ∈ H, (2.5)

which is known as the classical mixed variational inequality.
We note that if φ is the indicator function of a closed convex set K in H, that is,

φ(u) = 0 if u ∈ K, and φ(u) = +∞ otherwise,

then the general mixed variational inequality (2.4) is equivalent to finding u ∈ H, g(u) ∈ K, such that

⟨T(u), g(v) − g(u)⟩ ≥ 0, for all g(v) ∈ K, (2.6)

which is known as the general variational inequality.
For g ≡ I, where I is the identity operator, problem (2.6) is equivalent to finding u ∈ K such that

⟨T(u), v − u⟩ ≥ 0, for all v ∈ K, (2.7)

which is known as the classical variational inequality, introduced and studied by Stampacchia (1964).
From now onward, we assume that g is onto K unless otherwise specified.
If N_K(u) = {w ∈ H : ⟨w, v − u⟩ ≤ 0 for all v ∈ K} is a normal cone to the convex set K at u, then the general variational inequality (2.6) is equivalent to finding u ∈ H, g(u) ∈ K, such that

0 ∈ T(u) + N_K(g(u)),

which are known as the general nonlinear equations, see (Uko, 1998).
If P_{T_K(g(u))} is the projection of −T(u) onto the tangent cone T_K(g(u)) of K at g(u), then it has been shown that the general variational inequality problem (2.6) is equivalent to finding u ∈ H, g(u) ∈ K, such that

P_{T_K(g(u))}(−T(u)) = 0,

which are known as the tangent projection equations, see (Xiu et al., 2001). This equivalence has been used to discuss the local convergence analysis of a wide class of iterative methods for solving general variational inequalities (2.6).
If K* = {w ∈ H : ⟨w, v⟩ ≥ 0 for all v ∈ K} is a polar (dual) cone of a convex cone K in H, then problem (2.1) is equivalent to finding u ∈ H such that

g(u) ∈ K, T(u) ∈ K* and ⟨T(u), g(u)⟩ = 0,

which is known as the general complementarity problem.
We now recall the following well known result and concepts.
Brezis, 1973
If A is a maximal monotone operator on H, then, for a constant ρ > 0, the resolvent operator associated with A is defined by

J_A(u) = (I + ρA)⁻¹(u), for all u ∈ H,

where I is the identity operator. It is well known that a monotone operator is maximal if and only if its resolvent operator is defined everywhere. In addition, the resolvent operator is single-valued and nonexpansive, that is,

‖J_A(u) − J_A(v)‖ ≤ ‖u − v‖, for all u, v ∈ H.
It is well known that the subdifferential ∂φ of a proper, convex and lower semicontinuous function φ is a maximal monotone operator; we denote by

J_φ(u) = (I + ρ∂φ)⁻¹(u), for all u ∈ H,

the resolvent operator associated with ∂φ, which is defined everywhere on H.
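As an illustrative aside (not part of the original analysis), for the assumed choice φ(u) = ‖u‖₁ the resolvent J_φ has a closed form, componentwise soft-thresholding, and its nonexpansiveness can be checked numerically:

```python
import numpy as np

def resolvent_l1(z, rho):
    """Resolvent J_phi(z) = (I + rho*d(phi))^(-1)(z) for phi(u) = ||u||_1,
    i.e. the proximal map of rho*||.||_1: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - rho, 0.0)

# Nonexpansiveness check: ||J(u) - J(v)|| <= ||u - v||.
rng = np.random.default_rng(0)
u, v = rng.normal(size=5), rng.normal(size=5)
lhs = np.linalg.norm(resolvent_l1(u, 0.5) - resolvent_l1(v, 0.5))
rhs = np.linalg.norm(u - v)
assert lhs <= rhs + 1e-12
```

The soft-thresholding formula is the standard proximal map of the scaled ℓ₁-norm; any other proper, convex, lower semicontinuous φ gives a different but still nonexpansive resolvent.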
Lemma 2.1 (Brezis, 1973). For a given z ∈ H, u ∈ H satisfies the inequality

⟨u − z, v − u⟩ + ρφ(v) − ρφ(u) ≥ 0, for all v ∈ H,

if and only if u = J_φ(z), where J_φ = (I + ρ∂φ)⁻¹ is the resolvent operator.
We now introduce the general resolvent equations. To be more precise, let R_A ≡ I − J_A, where I is the identity operator and J_A = (I + ρA)⁻¹ is the resolvent operator. Let g : H → H be an invertible operator. For given nonlinear operators T, g, consider the problem of finding z ∈ H such that

T g⁻¹ J_A z + ρ⁻¹ R_A z = 0, (2.11)

which are known as the general resolvent equations.
If g ≡ I, then the general resolvent Eq. (2.11) collapses to finding z ∈ H such that

T J_A z + ρ⁻¹ R_A z = 0, (2.12)

which are known as the resolvent equations.
If A(·) = ∂φ(·), where ∂φ is the subdifferential of a proper, convex and lower semicontinuous function φ, then the general resolvent Eq. (2.11) is equivalent to finding z ∈ H such that

T g⁻¹ J_φ z + ρ⁻¹ R_φ z = 0, (2.13)

where R_φ ≡ I − J_φ.
We remark that if φ is the indicator function of a closed convex set K in H, then J_φ ≡ P_K, the projection of H onto K. Consequently, problem (2.13) is equivalent to finding z ∈ H such that

T g⁻¹ P_K z + ρ⁻¹ Q_K z = 0,

where Q_K ≡ I − P_K, which are known as the general Wiener–Hopf equations.
Definition 2.1. The operator T : H → H is said to be:
-
g-monotone, iff ⟨T(u) − T(v), g(u) − g(v)⟩ ≥ 0, for all u, v ∈ H;
-
g-pseudomonotone with respect to the function φ, iff
⟨T(u), g(v) − g(u)⟩ + φ(g(v)) − φ(g(u)) ≥ 0 implies ⟨T(v), g(v) − g(u)⟩ + φ(g(v)) − φ(g(u)) ≥ 0, for all u, v ∈ H.
For g ≡ I, Definition 2.1 reduces to the usual definitions of monotonicity and pseudomonotonicity of the operator T. Note that monotonicity implies pseudomonotonicity, but the converse is not true, see (Alvarez and Attouch, 2001).
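For g ≡ I and φ ≡ 0, the gap between the two notions can be checked numerically on a classical scalar example (the operator below is an illustration, not from the paper): T(x) = 1/(1 + x) on x ≥ 0 is pseudomonotone because it is positive, so T(u)(v − u) ≥ 0 forces v ≥ u and hence T(v)(v − u) ≥ 0; but it is decreasing, hence not monotone.

```python
import numpy as np

# T(x) = 1/(1+x) on x >= 0: positive and strictly decreasing.
T = lambda x: 1.0 / (1.0 + x)

# Pseudomonotonicity: whenever the premise T(u)(v-u) >= 0 holds,
# the conclusion T(v)(v-u) >= 0 must hold as well.
rng = np.random.default_rng(1)
for u, v in rng.uniform(0.0, 10.0, size=(1000, 2)):
    if T(u) * (v - u) >= 0:
        assert T(v) * (v - u) >= 0

# Monotonicity fails: (T(u) - T(v))(u - v) < 0 for u != v.
assert (T(1.0) - T(2.0)) * (1.0 - 2.0) < 0
```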
Resolvent-splitting methods
In this section, we use the technique of updating the solution to suggest and analyze a class of resolvent-splitting methods for solving the general variational inclusion (2.1). For this purpose, we need the following result, which can be proved by invoking Lemma 2.1.
Lemma 3.1 (Noor, 1988). The function u ∈ H is a solution of (2.1) if and only if u ∈ H satisfies the relation

g(u) = J_A[g(u) − ρ T(u)], (3.1)

where ρ > 0 is a constant and J_A = (I + ρA)⁻¹ is the resolvent operator.
Algorithm 3.1. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme

g(u_{k+1}) = J_A[g(u_k) − ρ T(u_k)], k = 0, 1, 2, …

Using the technique of Noor et al. (1993), one can prove that Algorithm 3.1 has local convergence behaviour, which enables us to identify accurately the optimal constraint after finitely many iterations.
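As a numerical aside, this fixed-point iteration can be sketched in the simplest special case g = I with A the normal cone of the nonnegative orthant, so that J_A reduces to a projection; the operator data M and q below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Fixed-point iteration g(u_{k+1}) = J_A[g(u_k) - rho*T(u_k)],
# specialized to g = I and A = normal cone of K = {u >= 0},
# so that J_A is the Euclidean projection onto K.
M = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite => T strongly monotone
q = np.array([-1.0, -2.0])
T = lambda u: M @ u + q
proj = lambda z: np.maximum(z, 0.0)      # J_A = P_K for the nonnegative orthant

u, rho = np.zeros(2), 0.2
for _ in range(200):
    u = proj(u - rho * T(u))

# u is (numerically) a fixed point, hence it solves the complementarity
# problem u >= 0, T(u) >= 0, <u, T(u)> = 0.
assert np.allclose(u, proj(u - rho * T(u)), atol=1e-8)
assert np.all(T(u) >= -1e-8)
```

For strongly monotone, Lipschitz continuous T and small enough ρ the map u ↦ P_K[u − ρT(u)] is a contraction, which is exactly the classical convergence requirement discussed below.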
It is well known (Noor, 2001a,b) that the convergence of the classical resolvent methods for solving variational inclusions requires that both the operators T and g must be strongly monotone and Lipschitz continuous. These strict conditions rule out many applications of projection-type methods. These factors motivated the modification of the resolvent-type methods by using the technique of updating the solution. Noor (2000, 2004) has used the technique of updating the solution to suggest and analyze a class of resolvent-splitting algorithms for general variational inclusions (2.1). Let g⁻¹ be the inverse of the operator g. Then Eq. (3.1) can be written in the form:

g(y) = J_A[g(u) − ρ T(u)], g(u) = J_A[g(y) − ρ T(y)],

or equivalently

g(y) = J_A[g(u) − ρ T(u)], g(w) = J_A[g(y) − ρ T(y)], g(u) = J_A[g(w) − ρ T(w)].

Using Lemma 3.1, one can easily show that u ∈ H is a solution of (2.1) if and only if u ∈ H satisfies the equation

g(u) − J_A[g(w) − ρ T(w)] = 0.

These modified fixed-point formulations allow us to suggest and analyze the following resolvent-splitting-type methods for solving general variational inclusions (2.1).
Algorithm 3.2. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative schemes

g(y_k) = J_A[g(u_k) − ρ T(u_k)],
g(u_{k+1}) = J_A[g(y_k) − ρ T(y_k)].

Algorithm 3.2 is known as the predictor-corrector method and is due to Moudafi and Thera (1997) and Noor (1998).
Algorithm 3.3. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme

g(y_k) = J_A[g(u_k) − ρ T(u_k)],
g(w_k) = J_A[g(y_k) − ρ T(y_k)],
g(u_{k+1}) = J_A[g(w_k) − ρ T(w_k)],

which is known as the three-step forward-backward resolvent-splitting algorithm, in which the order of T and J_A has not been changed, unlike in Glowinski and Tallec (1989). Algorithm 3.3 is similar to that of Glowinski and Tallec (1989), which they suggested by using the Lagrange multipliers method. For the convergence analysis of Algorithm 3.3, see (Glowinski and Tallec, 1989; He and Liao, 2002; Noor, 2001a, 2004).
Algorithm 3.4. For a given u₀ ∈ H, compute u_{k+1} by the iterative scheme

g(y_k) = J_A[g(u_k) − ρ T(u_k)],
g(u_{k+1}) = g(y_k) − ρ(T(y_k) − T(u_k)),

which is also called the splitting resolvent method. Algorithm 3.4 can be viewed as a generalization of the splitting algorithm of Tseng (2000). Using the technique of Tseng (2000), one can discuss its applications in mathematical programming and optimization.
In this paper, we suggest another class of self-adaptive resolvent iterative methods with a line search strategy. These methods include some known splitting methods as special cases.
We now define the resolvent residue vector by the relation

R(u) := g(u) − J_A[g(u) − ρ T(u)].

From Lemma 3.1, it is clear that u ∈ H is a solution of (2.1) if and only if u ∈ H is a zero of the equation

R(u) = 0.

We now prove that the variational inclusion (2.1) is equivalent to the resolvent Eq. (2.11) by invoking Lemma 3.1, and this is the prime motivation of our next result.
Theorem 3.1. The variational inclusion (2.1) has a solution u ∈ H if and only if the resolvent Eq. (2.11) has a solution z ∈ H, where

g(u) = J_A z and z = g(u) − ρ T(u).
From Theorem 3.1, we conclude that the variational inclusion (2.1) and the resolvent Eq. (2.11) are equivalent. This alternative formulation plays an important and crucial part in suggesting and analyzing various iterative methods for solving variational inclusions and related optimization problems. In this paper, by suitable and appropriate rearrangement, we suggest a number of new iterative methods for solving variational inclusions (2.1).
Using the fact that R_A ≡ I − J_A and setting z = g(u) − ρ T(u), so that g(y) = J_A z, the resolvent Eq. (2.11) can be written as

R(u) − ρ T(u) + ρ T(y) = 0.

Thus, for a positive stepsize γ, we can write the above equation as

g(u) = g(u) − γ D(u),

where

D(u) := R(u) − ρ T(u) + ρ T(y).
Algorithm 3.5. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative schemes
-
Predictor step
g(y_k) = J_A[g(u_k) − ρ_k T(u_k)],
where ρ_k = σ β^{m_k}, σ > 0, β ∈ (0, 1), and m_k is the smallest nonnegative integer m such that
σ β^m ‖T(u_k) − T(y_k)‖ ≤ δ ‖R(u_k)‖, δ ∈ (0, 1).
-
Corrector step
g(u_{k+1}) = g(u_k) − α_k D(u_k),
where
D(u_k) = R(u_k) − ρ_k T(u_k) + ρ_k T(y_k), α_k = ⟨R(u_k), D(u_k)⟩ / ‖D(u_k)‖².
Here α_k is the corrector step size which involves the resolvent Eq. (2.11).
If g ≡ I, the identity operator, then Algorithm 3.5 collapses to the following method for solving the classical variational inclusion (2.2).
Algorithm 3.6. For a given u₀ ∈ H, compute u_{k+1} by the iterative schemes
-
Predictor step
y_k = J_A[u_k − ρ_k T(u_k)],
where ρ_k satisfies
ρ_k ‖T(u_k) − T(y_k)‖ ≤ δ ‖R(u_k)‖, δ ∈ (0, 1).
-
Corrector step
u_{k+1} = u_k − α_k D(u_k),
where
D(u_k) = R(u_k) − ρ_k T(u_k) + ρ_k T(y_k), α_k = ⟨R(u_k), D(u_k)⟩ / ‖D(u_k)‖².
Here α_k is the corrector step size, which involves the resolvent equation.
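The predictor step with an Armijo-type reduction of ρ can be sketched numerically in the projection special case (g = I, A the normal cone of the nonnegative orthant). The acceptance test, the data M and q, and the simplified corrector below are illustrative assumptions rather than the paper's exact formulas:

```python
import numpy as np

def residue(u, rho, T, proj):
    # Resolvent residue R(u) = u - J_A[u - rho*T(u)] (here g = I and J_A = proj).
    return u - proj(u - rho * T(u))

def predictor(u, rho, T, proj, beta=0.5, delta=0.1, max_halvings=30):
    """Armijo-like line search: shrink rho until the (assumed) acceptance test
    rho * ||T(u) - T(y)|| <= delta * ||R(u)|| holds, where y = u - R(u)."""
    for _ in range(max_halvings):
        r = residue(u, rho, T, proj)
        y = u - r                      # predictor point
        if rho * np.linalg.norm(T(u) - T(y)) <= delta * np.linalg.norm(r):
            return y, rho
        rho *= beta                    # reduce the step size and retry
    return y, rho

# Illustrative monotone affine operator T(u) = M u + q on K = {u >= 0}.
M = np.array([[2.0, 0.0], [0.0, 5.0]])
q = np.array([-2.0, 1.0])
T = lambda u: M @ u + q
proj = lambda z: np.maximum(z, 0.0)

u, rho = np.ones(2), 1.0
for _ in range(100):
    u, rho = predictor(u, rho, T, proj)   # simplified corrector: accept the predictor

assert np.linalg.norm(residue(u, rho, T, proj)) < 1e-6
```

Note how the line search makes the scheme self-adaptive: no Lipschitz constant of T is supplied; ρ is halved only as often as the trial points demand.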
For a suitable special choice, Algorithm 3.5 becomes:
Algorithm 3.7. For a given u₀ ∈ H, compute u_{k+1} by the iterative schemes
-
Predictor step
where ρ_k satisfies the line search criterion of Algorithm 3.5.
-
Corrector step
where α_k is the corrector step size.
If A(·) = ∂φ(·), where ∂φ is the subdifferential of a proper, convex and lower semicontinuous function φ, then J_A ≡ J_φ, the resolvent operator, and consequently Algorithm 3.5 collapses to:
Algorithm 3.8. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative schemes
-
Predictor step
g(y_k) = J_φ[g(u_k) − ρ_k T(u_k)], (3.7)
R(u_k) = g(u_k) − J_φ[g(u_k) − ρ_k T(u_k)], (3.8)
ρ_k = σ β^{m_k}, (3.9)
where σ > 0, β ∈ (0, 1) and m_k is the smallest nonnegative integer m such that
σ β^m ‖T(u_k) − T(y_k)‖ ≤ δ ‖R(u_k)‖, δ ∈ (0, 1). (3.10)
-
Corrector step
g(u_{k+1}) = g(u_k) − α_k D(u_k), (3.11)
D(u_k) = R(u_k) − ρ_k T(u_k) + ρ_k T(y_k), (3.12)
α_k = ⟨R(u_k), D(u_k)⟩ / ‖D(u_k)‖², (3.13)
z_{k+1} = g(u_{k+1}) − ρ_k T(u_{k+1}). (3.14)
Here α_k is the corrector step size which involves the resolvent equation.
For suitable choices of the parameters, Algorithm 3.5 is exactly the resolvent-splitting Algorithm 3.1, which is due to Moudafi and Thera (1997). For other choices of the operators T, g and A, one can obtain several new splitting-type algorithms for solving classical (quasi) variational inclusions (2.2) and (2.3). This clearly shows that Algorithm 3.5 is a unifying one and includes several new and previously known methods as special cases.
For the convergence analysis of Algorithm 3.8, we need the following result, which can be proved using the technique of Noor (2001a). For the sake of completeness, we sketch the main ideas.
Lemma 3.2. If ū ∈ H is a solution of (2.1) and the operator T : H → H is g-monotone, then

⟨T(v), g(v) − g(ū)⟩ + φ(g(v)) − φ(g(ū)) ≥ 0, for all v ∈ H. (3.15)
Theorem 3.2. Let ū ∈ H be a solution of (2.1) and let u_{k+1} be the approximate solution obtained from Algorithm 3.8. Then

‖g(u_{k+1}) − g(ū)‖² ≤ ‖g(u_k) − g(ū)‖² − α_k ⟨R(u_k), D(u_k)⟩. (3.16)

Proof. Let ū ∈ H be a solution of (2.1) and let u_{k+1} be the approximate solution obtained from Algorithm 3.8. From (3.15), we have the required result. □
Theorem 3.3. Let ū ∈ H be a solution of (2.1) and let u_{k+1} be the approximate solution obtained from Algorithm 3.8. If H is a finite dimensional space and g is injective, then lim_{k→∞} u_k = ū.
Proof. Let ū ∈ H be a solution of (2.1). Then, from (3.16), it follows that the sequence {‖g(u_k) − g(ū)‖} is nonincreasing and consequently the sequence {u_k} is bounded. Thus, under the assumptions on g, it follows that the sequence {g(u_k)} is bounded. Furthermore, from (3.16), we have

Σ_{k=0}^∞ α_k ⟨R(u_k), D(u_k)⟩ ≤ ‖g(u_0) − g(ū)‖²,

which implies that

lim_{k→∞} α_k ⟨R(u_k), D(u_k)⟩ = 0.
Assume that (3.18) holds, that is,
If (3.10) does not hold, then by a choice of
, we obtain
Inertial proximal methods
In this section, we consider inertial proximal methods for solving the general variational inclusion (2.1). Inertial proximal methods were introduced and studied by Alvarez (2000) and Alvarez and Attouch (2001) in the context of an implicit discretization of second-order differential equations in time. These inertial-type methods include the classical proximal methods as a special case. Noor (2002a) has used the auxiliary principle technique to study the convergence analysis of proximal-type methods.
We again use the fixed-point formulation (3.1) to suggest and analyze the following iterative method for solving the general variational inclusion (2.1).
Algorithm 4.1. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme

g(u_{k+1}) = J_A[g(u_k) − ρ T(u_{k+1})],

which is known as the implicit resolvent method. For the convergence analysis of Algorithm 4.1, see (Noor, 2001a).
We can rewrite (3.1) in the following equivalent form.
This formulation is used to suggest the following iterative method for solving the general variational inclusions (2.1).
Algorithm 4.2. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme. If A(·) = ∂φ(·), where ∂φ is the subdifferential of a proper, convex and lower semicontinuous function φ, then J_A ≡ J_φ, the resolvent operator, and consequently Algorithm 4.2 collapses to:
Algorithm 4.3. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme. Using Lemma 3.1, Algorithm 4.3 can be written as:
Algorithm 4.4. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme.
Algorithm 4.5. For a given u₀ ∈ H, compute the approximate solution u_{k+1} by the iterative scheme. In a similar way, one can suggest and analyze several iterative methods for solving the general variational inclusions and related optimization problems.
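As an illustrative aside, the inertial proximal idea of Alvarez and Attouch (2001), u_{k+1} = J_A[u_k + α_k(u_k − u_{k−1})], can be sketched for an assumed toy operator whose resolvent is available in closed form:

```python
import numpy as np

# Inertial proximal iteration u_{k+1} = J_A[u_k + alpha*(u_k - u_{k-1})]
# in the simplest assumed case A = gradient of f(u) = 0.5*||u - c||^2,
# whose resolvent (proximal map) is (z + rho*c)/(1 + rho).
c = np.array([1.0, -2.0])
prox = lambda z, rho: (z + rho * c) / (1.0 + rho)

rho, alpha = 1.0, 0.3          # alpha in [0, 1): inertial (extrapolation) parameter
u_prev = np.zeros(2)
u = np.zeros(2)
for _ in range(100):
    y = u + alpha * (u - u_prev)       # inertial extrapolation step
    u_prev, u = u, prox(y, rho)        # proximal (resolvent) step

assert np.allclose(u, c, atol=1e-8)    # the iterates converge to the zero of A
```

Setting alpha = 0 recovers the classical proximal point method; the extrapolation term is the discretization of the second-order (inertial) dynamics mentioned above.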
For the convergence analysis of Algorithm 4.4, we recall the following well known result.
Let ū ∈ H be a solution of (2.1) and let u_{k+1} be the approximate solution obtained from Algorithm 4.4. If the operator T is g-pseudomonotone, then
Proof. Let ū ∈ H be a solution of (2.1). Then

implies that

Taking v = u_{k+1} in (4.4), we have
Let H be a finite dimensional space and let g be an injective operator. Let u_{k+1} be the approximate solution obtained from Algorithm 4.4 and let ū ∈ H be a solution of (2.1). If there exist α_k ∈ [0, 1) such that

Σ_{k=1}^∞ α_k ‖g(u_k) − g(u_{k−1})‖² < ∞,

then lim_{k→∞} u_k = ū.
Proof. Let ū ∈ H be a solution of (2.1). First, we consider the case α_k = 0. In this case, from (4.3), we see that the sequence {‖g(u_k) − g(ū)‖} is nonincreasing and consequently the sequence {u_k} is bounded. Also, from (4.3), we have

Σ_{k=0}^∞ ‖g(u_{k+1}) − g(u_k)‖² ≤ ‖g(u_0) − g(ū)‖²,

which implies that

lim_{k→∞} ‖g(u_{k+1}) − g(u_k)‖ = 0.
Acknowledgement
The authors would like to express their deepest gratitude to the Rector, CIIT, Islamabad, Pakistan, and the President, King Saud University, Riyadh, Saudi Arabia, for providing excellent research facilities.
References
- Alvarez, F., 2000. On the minimization property of a second order dissipative system in Hilbert space. SIAM J. Control Optim. 38, 1102–1119.
- Alvarez, F., Attouch, H., 2001. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11.
- Brezis, H., 1973. Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam.
- El Farouq, N., 2001. Pseudomonotone variational inequalities: convergence of proximal methods. J. Optim. Theory Appl. 109, 311–326.
- Giannessi, F., Maugeri, A., 1995. Variational Inequalities and Network Equilibrium Problems. Plenum Press, New York.
- Giannessi, F., Maugeri, A., Pardalos, P.M., 2001. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models. Kluwer Academic Publishers, Dordrecht.
- Glowinski, R., Le Tallec, P., 1989. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM, Philadelphia, PA.
- Haugruge, S., Nguyen, V.H., Strodiot, J.J., 1998. Convergence analysis and applications of the Glowinski–Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 97, 645–673.
- He, B.S., Liao, L.Z., 2002. Improvement of some projection methods for monotone nonlinear variational inequalities. J. Optim. Theory Appl. 112, 111–128.
- Kinderlehrer, D., Stampacchia, G., 2000. An Introduction to Variational Inequalities and Their Applications. SIAM, Philadelphia, PA.
- Moudafi, A., Noor, M.A., 1999. Sensitivity analysis of variational inclusions by the Wiener–Hopf equations technique. J. Appl. Math. Stochastic Anal. 12.
- Moudafi, A., Thera, M., 1997. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 97, 425–448.
- Noor, M.A., 1993. Wiener–Hopf equations and variational inequalities. J. Optim. Theory Appl. 79, 197–206.
- Noor, M.A., 1997a. Some recent advances in variational inequalities. Part I: basic concepts. New Zealand J. Math. 26, 53–80.
- Noor, M.A., 1997b. Some recent advances in variational inequalities. Part II: other concepts. New Zealand J. Math. 26, 229–255.
- Noor, M.A., 1998. Generalized set-valued variational inclusions and resolvent equations. J. Math. Anal. Appl. 228, 206–220.
- Noor, M.A., 2000. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 251, 217–229.
- Noor, M.A., 2001a. Resolvent equations technique for general variational inclusions. Nonlinear Anal. Forum 6, 171–184.
- Noor, M.A., 2001b. Three-step iterative algorithms for multivalued quasi variational inclusions. J. Math. Anal. Appl. 255, 589–604.
- Noor, M.A., 2002a. Proximal methods for mixed variational inequalities. J. Optim. Theory Appl. 115, 447–452.
- Noor, M.A., 2002b. Projection-splitting methods for general variational inequalities. J. Comput. Anal. Appl. 4, 47–61.
- Noor, M.A., 2003. New extragradient-type methods for general variational inequalities. J. Math. Anal. Appl. 277, 379–395.
- Noor, M.A., 2004. Some developments in general variational inequalities. Appl. Math. Comput. 152, 199–277.
- Noor, M.A., 2006a. Merit functions for general variational inequalities. J. Math. Anal. Appl. 316, 736–752.
- Noor, M.A., 2006b. Projection-proximal methods for general variational inequalities. J. Math. Anal. Appl. 318, 53–62.
- Noor, M.A., 2009a. Iterative methods for general nonconvex variational inequalities. Albanian J. Math. 3, 117–127.
- Noor, M.A., 2009b. Some classes of general nonconvex variational inequalities. Albanian J. Math. 3, 175–188.
- Noor, M.A., 2009c. Implicit iterative methods for nonconvex variational inequalities. J. Optim. Theory Appl. 143, 619–624.
- Noor, M.A., 2010a. An extragradient algorithm for solving general nonconvex variational inequalities. Appl. Math. Lett. 23, 917–921.
- Noor, M.A., 2010b. Auxiliary principle technique for solving general mixed variational inequalities. J. Adv. Math. Stud. 3 (2).
- Noor, M.A., 2010c. On an implicit method for nonconvex variational inequalities. J. Optim. Theory Appl. 147.
- Noor, M.A., Noor, K.I., 2004. Self-adaptive projection methods for general variational inequalities. Appl. Math. Comput. 154, 659–670.
- Noor, M.A., Noor, K.I., 2010. New system of general nonconvex variational inequalities. Appl. Math. E-Notes 10, 76–85.
- Noor, M.A., Rassias, Th.M., 2002. A class of projection methods for general variational inequalities. J. Math. Anal. Appl. 268, 334–343.
- Patriksson, M., 1998. Nonlinear Programming and Variational Inequalities: A Unified Approach. Kluwer Academic Publishers, Dordrecht.
- Shi, P., 1991. Equivalence of variational inequalities with Wiener–Hopf equations. Proc. Am. Math. Soc. 111, 339–346.
- Stampacchia, G., 1964. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416.
- Tseng, P., 2000. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446.
- Xiu, N., Zhang, J., Noor, M.A., 2001. Tangent projection equations and general variational inequalities. J. Math. Anal. Appl. 258, 755–762.