Original article
2022;35:102502. doi: 10.1016/j.jksus.2022.102502

Interval linear algebra – A new perspective

Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, Tamil Nadu, India

⁎Corresponding author. ganesank@srmist.edu.in (K. Ganesan)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Peer review under responsibility of King Saud University.

Abstract

The purpose of this article is to place interval linear algebra on a sound algebraic footing by judiciously defining a field, and a vector space over that field, in terms of equivalence classes of intervals. Classifying mathematical objects by bringing them into canonical forms is an important subject of study in every branch of mathematics. The concepts of eigenvalues and eigenvectors are used in several fields, including machine learning, quantum computing, communication system design, construction design, and electrical and mechanical engineering. The study of interval matrices leads to canonical forms of interval matrices, which will help us classify and study interval matrices more effectively and more easily.

Keywords

Interval
Interval matrix
Equivalence relation
Eigenvalues
Eigenvectors
Canonical forms
1 Introduction

A real-life problem can be formulated as a mathematical model involving matrices, systems of linear and nonlinear equations, ordinary and partial differential equations, and linear and nonlinear programming problems. When the decision parameters of the model are known precisely, classical mathematical techniques and methods can be used successfully to solve such problems. In real-world situations, however, the decision parameters are unlikely to be certain, owing to errors in data, inadequate information, errors in the formulation of the mathematical model, changes in the parameters of the system, computational errors, and so on. Hence, in order to model real-world problems and perform computations, this uncertainty and inexactness must be dealt with, and traditional classical mathematical techniques alone cannot do so. Probability theory, the theory of fuzzy sets, the theory of interval mathematics and the theory of rough sets, among others, deal with uncertainty and inexactness successfully. The uncertainties in a problem have to be represented suitably so that their effects on the present decision making can be properly taken into account. In the fuzzy approach the membership functions of the fuzzy parameters, and in the stochastic approach the probability distributions of the decision parameters, have to be known. In real-life situations, however, it is not simple to specify a membership function or a probability distribution in an inexact environment. Therefore, the use of intervals is more appropriate for an inexact environment; intervals are efficient and reliable tools that allow us to handle such problems effectively.

The role of matrices is all-encompassing and ubiquitous in every discipline. Hansen and Smith (1967) highlighted the importance of interval arithmetic in matrix computations. Motivated by this, many authors, including Alefeld and Herzberger (1983), Deif (1991a), Ganesan and Veeramani (2005), Ganesan (2007), Goze (2010), Sengupta and Pal (2000), Jaulin et al. (2003), Neumaier (1990) and Singh and Gupta (2015), have studied interval matrices. Hansen (1969, 1992) and Hansen and Walster (2004) discussed the solution of linear algebraic equations with interval coefficients and global optimization using interval analysis.

Deif (1991a) introduced a characterization of the set of eigenvalues of a general interval matrix and obtained upper and lower bounds for the eigenvalues. Ganesan and Veeramani (2005) proposed a new type of arithmetic operations on interval numbers. Ganesan (2007) studied some important properties of interval matrices. Sunaga (2009) investigated the algebraic properties of intervals, interval functions and functionals and their derivatives. Singh and Gupta (2015) applied the interval extension of the single-step eigen-perturbation method and obtained sharp eigenvalue bounds for real symmetric interval matrices by solving the standard interval eigenvalue problem; the deviation amplitude of the interval matrix is treated as a perturbation around its nominal value. Nerantzis and Adjiman (2017) presented a branch-and-bound algorithm for calculating bounds on all individual eigenvalues of a symmetric interval matrix. Qiu and Wang (2005) evaluated upper and lower bounds on the eigenvalues of the standard eigenvalue problem of structures subject to severely deficient information about the structural parameters. Gavalec et al. (2020) studied three types of interval eigenvectors, namely the strong, the strongly universal and the weak interval eigenvector of an interval matrix in max-plus algebra. Sit (2021) studied some properties of interval matrices and established theoretical results on the regularity and singularity of an interval matrix using its eigenvalues. Qiu et al. (1996, 2001, 2005) studied eigenvalue problems involving uncertain but non-random interval stiffness and mass matrices; they proposed an approximate method for estimating the bounds of eigenvalues of the standard interval eigenvalue problem for real non-symmetric interval matrices, and also the eigenvalue bounds of structures with uncertain-but-bounded parameters. Several other authors (Deif, 1991b; Hertz, 1992; Kolev, 2006; Leng and He, 2007; Rump and Zemke, 2004; Wilkinson, 1961; Yamamoto, 2001; Yuan et al., 2008) have also contributed to the study of interval eigenvalue problems.

In this article, we introduce new pairing techniques and arithmetic operations on intervals in $IR$ and on interval matrices in $IR^{m \times n}$. This paper sets the tone for the canonicalization of interval matrices by determining the most important characteristics of an interval matrix, namely its interval eigenvalues and interval eigenvectors. The specialty of the approach presented in this paper is that the interval eigenvalues and normalized interval eigenvectors are uniquely determined. We also discuss an electric circuit problem under uncertainty described by a system of interval linear differential equations. Numerical examples are provided to illustrate the theory developed in this paper.

This paper is arranged as follows: Section 1 is the introduction. Section 2 provides the necessary preliminaries and results about closed intervals on the real line. Section 3 contains the main work on interval linear algebra, and in Section 4 some examples are discussed using the proposed method. Section 5 deals with a real-world application of the proposed method to electric circuit theory in an uncertain environment, modeled as a system of interval linear differential equations. Finally, a brief conclusion is given in Section 6.

2 Preliminaries

Let $IR = \{\tilde{a} = [a^L, a^U] : a^L \le a^U \text{ and } a^L, a^U \in \mathbb{R}\}$ be the set of all closed and bounded intervals. If $a^L = a^U$, then $\tilde{a}$ is a degenerate interval.

The intervals are identified with ordered pairs $\langle m, w \rangle$ as follows. Let $\tilde{a} = [a^L, a^U] \in IR$. Define $m(\tilde{a}) = \dfrac{a^L + a^U}{2}$ and $w(\tilde{a}) = \dfrac{a^U - a^L}{2}$, so that $\tilde{a}$ can be uniquely expressed as $\langle m(\tilde{a}), w(\tilde{a}) \rangle$. Conversely, when $\langle m(\tilde{a}), w(\tilde{a}) \rangle$ is known, then $a^L = m(\tilde{a}) - w(\tilde{a})$ and $a^U = m(\tilde{a}) + w(\tilde{a})$, and hence, given $\langle m(\tilde{a}), w(\tilde{a}) \rangle$, the interval $[a^L, a^U]$ is unique.

Note: If $m(\tilde{a}) = 0$, then $\tilde{a}$ is said to be a zero interval. In particular, if $m(\tilde{a}) = 0$ and $w(\tilde{a}) = 0$, then $\tilde{a} = [0, 0]$. If $m(\tilde{a}) = 0$ and $w(\tilde{a}) \neq 0$, then $\tilde{a} \approx \tilde{0}$. It is to be noted that if $\tilde{a} = [0, 0]$, then $\tilde{a} \approx \tilde{0}$, but the converse need not be true. If $\tilde{a} \not\approx \tilde{0}$, then $\tilde{a}$ is said to be a non-zero interval. If $m(\tilde{a}) > 0$, then $\tilde{a}$ is said to be a positive interval, denoted $\tilde{a} \succ \tilde{0}$.
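A minimal Python sketch of the midpoint-width encoding described above may help fix ideas; the function names are illustrative and are not part of the paper.

```python
# A minimal sketch (not from the paper) of the midpoint-width encoding of an interval.
def to_mid_width(a_lower, a_upper):
    """Return the pair (m, w) for the interval [a_lower, a_upper]."""
    return (a_lower + a_upper) / 2, (a_upper - a_lower) / 2

def to_endpoints(m, w):
    """Recover the interval [a_lower, a_upper] from the pair (m, w)."""
    return m - w, m + w

# Example: [1, 3] corresponds to the pair <2, 1>, and the conversion is reversible.
assert to_mid_width(1, 3) == (2.0, 1.0)
assert to_endpoints(2.0, 1.0) == (1.0, 3.0)
```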

2.1 Arithmetic operations on intervals

We define a new type of arithmetic operations on intervals through their midpoints and widths. Midpoints follow the usual arithmetic on $\mathbb{R}$, while widths follow the lattice rule given by the least upper bound and greatest lower bound in a lattice $L$; that is, for $a, b \in L$, $a \vee b = \max\{a, b\}$ and $a \wedge b = \min\{a, b\}$.

For any two intervals $\tilde{a} = \langle m(\tilde{a}), w(\tilde{a}) \rangle$, $\tilde{b} = \langle m(\tilde{b}), w(\tilde{b}) \rangle$ and $* \in \{+, -, \times, \div\}$, the arithmetic operations are defined as
$$\tilde{a} * \tilde{b} = \langle m(\tilde{a}), w(\tilde{a}) \rangle * \langle m(\tilde{b}), w(\tilde{b}) \rangle = \langle m(\tilde{a}) * m(\tilde{b}),\ w(\tilde{a}) \vee w(\tilde{b}) \rangle = \langle m(\tilde{a}) * m(\tilde{b}),\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle.$$
In particular, for $\tilde{a} = \langle m(\tilde{a}), w(\tilde{a}) \rangle$ and $\tilde{b} = \langle m(\tilde{b}), w(\tilde{b}) \rangle$ in $IR$, we have:

  1. Addition: $\tilde{a} + \tilde{b} = \langle m(\tilde{a}) + m(\tilde{b}),\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle$

  2. Subtraction: $\tilde{a} - \tilde{b} = \langle m(\tilde{a}) - m(\tilde{b}),\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle$

  3. Multiplication: $\tilde{a} \times \tilde{b} = \langle m(\tilde{a}) \times m(\tilde{b}),\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle$

  4. Division: $\tilde{a} \div \tilde{b} = \langle m(\tilde{a}) \div m(\tilde{b}),\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle$, provided $m(\tilde{b}) \neq 0$.
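The following small Python sketch (illustrative, not part of the paper) implements the four operations above on intervals stored by midpoint and width; the class name is an assumption made only for the example.

```python
# Sketch of the proposed arithmetic: midpoints follow ordinary real arithmetic,
# widths follow the lattice rule max{w(a), w(b)}.
class Interval:
    def __init__(self, lo, hi):
        self.m = (lo + hi) / 2
        self.w = (hi - lo) / 2

    @classmethod
    def from_mw(cls, m, w):
        return cls(m - w, m + w)

    def __add__(self, other):
        return Interval.from_mw(self.m + other.m, max(self.w, other.w))

    def __sub__(self, other):
        return Interval.from_mw(self.m - other.m, max(self.w, other.w))

    def __mul__(self, other):
        return Interval.from_mw(self.m * other.m, max(self.w, other.w))

    def __truediv__(self, other):
        if other.m == 0:
            raise ZeroDivisionError("division requires m(b) != 0")
        return Interval.from_mw(self.m / other.m, max(self.w, other.w))

    def __repr__(self):
        return f"[{self.m - self.w}, {self.m + self.w}]"

# Example: [1, 3] + [2, 8] = <2 + 5, max{1, 3}> = <7, 3> = [4, 10]
print(Interval(1, 3) + Interval(2, 8))   # [4.0, 10.0]
```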

Proposition 1

(Ganesan and Veeramani, 2005) For any two intervals $\tilde{a} = \langle m(\tilde{a}), w(\tilde{a}) \rangle$ and $\tilde{b} = \langle m(\tilde{b}), w(\tilde{b}) \rangle$ in $IR$, we have

(i) $m(\tilde{a} + \tilde{b}) = m(\tilde{a}) + m(\tilde{b})$ and $w(\tilde{a} + \tilde{b}) = w(\tilde{a}) + w(\tilde{b})$.

(ii) $m(\tilde{a} - \tilde{b}) = m(\tilde{a}) - m(\tilde{b})$ and $w(\tilde{a} - \tilde{b}) = w(\tilde{a}) + w(\tilde{b})$.

(iii) $m(\tilde{a} \cdot \tilde{b}) = m(\tilde{a}) \cdot m(\tilde{b})$ and $w(\tilde{a} \cdot \tilde{b}) = 0$ if $w(\tilde{a}) = 0$ or $w(\tilde{b}) = 0$.

(iv) $m(\alpha \tilde{a} + \beta \tilde{b}) = \alpha\, m(\tilde{a}) + \beta\, m(\tilde{b})$ and $w(\alpha \tilde{a} + \beta \tilde{b}) = |\alpha|\, w(\tilde{a}) + |\beta|\, w(\tilde{b})$, where $\alpha, \beta \in \mathbb{R}$.

2.2 Exponentiation of intervals

In certain applications, the exponentiation of intervals plays a key role in arriving at the solution, so we define the exponential of an interval as follows.

Definition 1

For any interval $\tilde{a} \in IR$, we define $e^{\tilde{a}} = \langle e^{m(\tilde{a})}, w(\tilde{a}) \rangle$.

Result. For any two intervals $\tilde{a}, \tilde{b} \in IR$, we have $e^{\tilde{a}} = \langle e^{m(\tilde{a})}, w(\tilde{a}) \rangle$ and $e^{\tilde{b}} = \langle e^{m(\tilde{b})}, w(\tilde{b}) \rangle$. Therefore $e^{\tilde{a}} \cdot e^{\tilde{b}} = \langle e^{m(\tilde{a})}, w(\tilde{a}) \rangle \cdot \langle e^{m(\tilde{b})}, w(\tilde{b}) \rangle = \langle e^{m(\tilde{a})} \cdot e^{m(\tilde{b})},\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle = \langle e^{m(\tilde{a}) + m(\tilde{b})},\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle = \langle e^{m(\tilde{a} + \tilde{b})},\ \max\{w(\tilde{a}), w(\tilde{b})\} \rangle = e^{\tilde{a} + \tilde{b}}$. That is, $e^{\tilde{a} + \tilde{b}} = e^{\tilde{a}} \cdot e^{\tilde{b}}$.
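As a quick numerical illustration (helper names are ours, chosen only for this sketch), the following check confirms the identity just derived, $e^{\tilde{a}} \cdot e^{\tilde{b}} = e^{\tilde{a} + \tilde{b}}$, under the proposed rules:

```python
import math

# Check e^a . e^b = e^(a+b) under the proposed rules, with intervals as (mid, width) pairs.
def exp_iv(m, w):
    # e^<m, w> := <e^m, w>
    return math.exp(m), w

def mul_iv(a, b):
    # <m1, w1> x <m2, w2> := <m1*m2, max(w1, w2)>
    return a[0] * b[0], max(a[1], b[1])

a, b = (1.0, 0.2), (2.0, 0.5)                 # two intervals as (mid, width)
lhs = mul_iv(exp_iv(*a), exp_iv(*b))          # e^a . e^b
rhs = exp_iv(a[0] + b[0], max(a[1], b[1]))    # e^(a+b)
assert all(math.isclose(x, y) for x, y in zip(lhs, rhs))
```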

3 Interval linear algebra

Theorem 1

For all $\tilde{a}, \tilde{b} \in IR$, define $\tilde{a} \approx \tilde{b}$ if $m(\tilde{a}) = m(\tilde{b})$. Then this relation, say $R$, is an equivalence relation on $IR$.

Proof: For any $\tilde{a} \in IR$, $m(\tilde{a}) = m(\tilde{a})$. Hence $\tilde{a} \approx \tilde{a}$, so $R$ is reflexive.

Let $\tilde{a}, \tilde{b} \in IR$ with $\tilde{a} \approx \tilde{b}$. Then $m(\tilde{a}) = m(\tilde{b}) \Rightarrow m(\tilde{b}) = m(\tilde{a}) \Rightarrow \tilde{b} \approx \tilde{a}$. Thus $R$ is symmetric.

Let $\tilde{a}, \tilde{b}, \tilde{c} \in IR$ with $\tilde{a} \approx \tilde{b}$ and $\tilde{b} \approx \tilde{c}$. Then $m(\tilde{a}) = m(\tilde{b})$ and $m(\tilde{b}) = m(\tilde{c})$, hence $m(\tilde{a}) = m(\tilde{c})$, which implies $\tilde{a} \approx \tilde{c}$. Thus $R$ is transitive. Hence $R$ is an equivalence relation.

Note: Let $E = IR / R$ be the set of all equivalence classes of intervals under the equivalence relation $R$ on $IR$. All intervals in an equivalence class have the same midpoint, say $m \in \mathbb{R}$. We denote this equivalence class by $\overline{m}$, and $\overline{m}$ will be identified with $m$ (or with $\tilde{m}$, where $\tilde{m}$ is any closed interval in $\overline{m}$).

Definition 2

On E, we define a binary operation +, called addition as follows:

Take $\overline{m_1}, \overline{m_2} \in E$. Define $\overline{m_1} + \overline{m_2} = \overline{(m_1 + m_2)}$. We note that $+$ is well defined on $E$ and is indeed a binary operation on $E$.

Define another binary operation · on E called multiplication as follows:

For all $\overline{m_1}, \overline{m_2} \in E$, we define $\overline{m_1} \cdot \overline{m_2} = \overline{(m_1 \cdot m_2)}$. We note that $\cdot$ on the left-hand side is being defined on $E$, while on the right-hand side $m_1 \cdot m_2$ denotes the usual multiplication of real numbers. $\cdot$ is well defined on $E$ and turns out to be a binary operation on $E$. We may write $\overline{m_1} \cdot \overline{m_2}$ interchangeably as $\overline{m_1}\,\overline{m_2}$, as in the case of multiplication of real numbers.

Definition 3

Any real number $\alpha \in \mathbb{R}$ can be converted into an interval $\tilde{\alpha} \in IR$ by choosing any non-negative real number $\beta$ and defining $\tilde{\alpha} = [\alpha - \beta, \alpha + \beta]$. This non-negative real number $\beta$ is said to pair with $\alpha$.

In this paper, the crisp eigenvalues and crisp eigenvectors of $m(A)$ are converted into interval eigenvalues and interval eigenvectors of $A$ by judiciously choosing the pairing number $\beta > 0$ in such a way that the interval eigenvalues and normalized interval eigenvectors are unique.
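As a concrete illustration of Definition 3 (the helper below is hypothetical, not from the paper), pairing the crisp value $4.5$ with $\beta = 0.14$ reproduces the interval $[4.36, 4.64]$ that appears later in Example 1.

```python
def pair(alpha, beta):
    """Pair a crisp real number alpha with a non-negative pairing number beta."""
    assert beta >= 0, "the pairing number must be non-negative"
    return (alpha - beta, alpha + beta)

# Example: the crisp eigenvalue 4.5 paired with beta = 0.14 gives [4.36, 4.64].
print(pair(4.5, 0.14))   # (4.36, 4.64)
```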

Theorem 2

$(E, +, \cdot)$ is a field.

Proof: Let $\overline{m_1}, \overline{m_2}, \overline{m_3} \in E$.

(i) $\overline{m_1} + \overline{m_2} = \overline{(m_1 + m_2)}$ (by definition). The sum of the two real numbers $m_1, m_2$ is $(m_1 + m_2)$, which is again a real number, and hence $\overline{(m_1 + m_2)} \in E$. Hence addition in $E$ has the closure property.

(ii) Since addition in $\mathbb{R}$ is associative, we have $(\overline{m_1} + \overline{m_2}) + \overline{m_3} = \overline{(m_1 + m_2)} + \overline{m_3} = \overline{((m_1 + m_2) + m_3)} = \overline{(m_1 + (m_2 + m_3))} = \overline{m_1} + (\overline{m_2} + \overline{m_3})$.

(iii) $0 \in \mathbb{R}$ and hence $\overline{0} \in E$, and $\overline{m_1} + \overline{0} = \overline{(m_1 + 0)} = \overline{m_1} = \overline{(0 + m_1)} = \overline{0} + \overline{m_1}$. Hence $\overline{0}$ is the identity on $E$, which we call the additive identity on $E$.

(iv) Let $\overline{m_1} \in E$; then $m_1 \in \mathbb{R}$ and so $-m_1 \in \mathbb{R}$, which gives $\overline{-m_1} \in E$. Also $\overline{m_1} + \overline{(-m_1)} = \overline{(m_1 + (-m_1))} = \overline{0}$. Similarly $\overline{(-m_1)} + \overline{m_1} = \overline{0}$. Thus we conclude that $\overline{-m_1}$ is the additive inverse of $\overline{m_1}$.

(v) $\overline{m_1} + \overline{m_2} = \overline{(m_1 + m_2)} = \overline{(m_2 + m_1)} = \overline{m_2} + \overline{m_1}$, and hence we have the commutativity of $+$ on $E$.

(vi) The product of two real numbers is a real number, and hence $\overline{m_1} \cdot \overline{m_2} = \overline{(m_1 \cdot m_2)} \in E$ for all $\overline{m_1}, \overline{m_2} \in E$. Hence $\cdot$ has the closure property on $E$.

(vii) The product (usual multiplication) of real numbers is associative, and hence $(\overline{m_1}\,\overline{m_2})\,\overline{m_3} = \overline{(m_1 m_2)}\,\overline{m_3} = \overline{((m_1 m_2) m_3)} = \overline{(m_1 (m_2 m_3))} = \overline{m_1}\,\overline{(m_2 m_3)} = \overline{m_1}(\overline{m_2}\,\overline{m_3})$. Hence $\cdot$ is associative in $E$.

(viii) $1 \in \mathbb{R}$, therefore $\overline{1} \in E$ and $\overline{1}\,\overline{m_1} = \overline{(1 \cdot m_1)} = \overline{m_1} = \overline{(m_1 \cdot 1)} = \overline{m_1}\,\overline{1}$. Therefore $\overline{1}$ is the identity (the multiplicative identity, which we also call the unity) in $E$.

(ix) Let $\overline{m_1} \neq \overline{0}$. Then $m_1 \neq 0$, so $\frac{1}{m_1} \in \mathbb{R}$ and $\overline{m_1}\,\overline{\left(\tfrac{1}{m_1}\right)} = \overline{\left(m_1 \cdot \tfrac{1}{m_1}\right)} = \overline{1} = \overline{\left(\tfrac{1}{m_1}\right)}\,\overline{m_1}$. Hence every non-zero element of $E$ has a multiplicative inverse in $E$.

(x) $\overline{m_1}\,\overline{m_2} = \overline{(m_1 m_2)} = \overline{(m_2 m_1)} = \overline{m_2}\,\overline{m_1}$. Therefore multiplication is commutative in $E$.

(xi) $\overline{m_1}(\overline{m_2} + \overline{m_3}) = \overline{m_1}\,\overline{(m_2 + m_3)} = \overline{(m_1 (m_2 + m_3))} = \overline{(m_1 m_2 + m_1 m_3)} = \overline{(m_1 m_2)} + \overline{(m_1 m_3)} = \overline{m_1}\,\overline{m_2} + \overline{m_1}\,\overline{m_3}$. Hence multiplication in $E$ distributes over addition in $E$.

Thus $E$ is a field under the two binary operations $+$ and $\cdot$.

Theorem 3

(Perron-Frobenius; Strang, 2003) Let $A$ be an $(n \times n)$ matrix with non-negative entries. Then the following are true:

  1. $A$ has a non-negative real eigenvalue. The largest such eigenvalue, $\lambda(A)$, dominates the absolute values of all other eigenvalues of $A$. The domination is strict if the entries of $A$ are strictly positive.

  2. If $A$ has strictly positive entries, then $\lambda(A)$ is a simple positive eigenvalue, and the corresponding eigenvector can be normalized to have strictly positive entries.

  3. If $A$ has an eigenvector $\nu$ with strictly positive entries, then the corresponding eigenvalue $\lambda_\nu$ is $\lambda(A)$.

Theorem 4

$E$ and $\mathbb{R}$ are isomorphic as fields, via the natural isomorphism which associates $m \in \mathbb{R}$ with $\overline{m} \in E$.

Proof: Define $f : E \to \mathbb{R}$ by $f(\overline{m}) = m$ for all $\overline{m} \in E$. This $f$ is well defined on $E$, since the midpoint of every interval in an equivalence class is the same. Also $f(\overline{m_1} + \overline{m_2}) = f(\overline{(m_1 + m_2)}) = m_1 + m_2$ (by definition) $= f(\overline{m_1}) + f(\overline{m_2})$. Therefore $f$ is additive on $E$ (i.e. $f$ preserves addition on $E$). Now $f(\overline{m_1} \cdot \overline{m_2}) = f(\overline{(m_1 m_2)}) = m_1 m_2 = f(\overline{m_1}) f(\overline{m_2})$. Hence $f$ is multiplicative on $E$; that is, $f$ preserves multiplication on $E$.

Claim (i): f is 1 - 1 (injective).

$f(\overline{m_1}) = f(\overline{m_2}) \Rightarrow m_1 = m_2 \Rightarrow \overline{m_1} = \overline{m_2}$. Hence $f$ is injective.

Claim (ii): f is onto (surjective).

For every $m \in \mathbb{R}$, we have $\overline{m} \in E$ and $f(\overline{m}) = m$, and hence $f$ is surjective. Thus $f$ is a bijective ring homomorphism between the fields $E$ and $\mathbb{R}$, which leads us to conclude that $E$ is isomorphic to $\mathbb{R}$; that is, $E \cong \mathbb{R}$.

Note: This isomorphism enables us to identify an element of E (an equivalence class) with a real number which is the mid point of any interval in the equivalence class.

Using the identification between intervals and ordered pairs (i.e. between $\tilde{a} = [a^L, a^U]$ and $\langle m(\tilde{a}), w(\tilde{a}) \rangle$), we obtain unique solutions to interval equations and unique eigenvalues and eigenvectors of interval matrices, which in turn lead to canonical forms of interval matrices.

The field isomorphism between $E$ and $\mathbb{R}$ enables us to convert interval matrices into classical matrices. After finding the eigenvalues and normalized eigenvectors of the classical matrices formed by the midpoints and widths of the intervals, we obtain interval eigenvalues and normalized interval eigenvectors for the given interval matrices. This is illustrated by Examples 1 and 2.

3.1 Interval matrices

A classical matrix is a rectangular array of elements of a field $F$. Whenever impreciseness or uncertainty occurs in the entries of a matrix, we switch over to interval matrices, in which each entry of the matrix is a closed and bounded interval in $\mathbb{R}$.

Definition 4

An interval matrix $A$ is a matrix whose entries are closed and bounded intervals; it is denoted by $A = \begin{pmatrix} \tilde{a}_{11} & \cdots & \tilde{a}_{1n} \\ \vdots & \ddots & \vdots \\ \tilde{a}_{m1} & \cdots & \tilde{a}_{mn} \end{pmatrix} = (\tilde{a}_{ij})$, where each $\tilde{a}_{ij} = [a_{ij}^L, a_{ij}^U]$.

We use $E^{m \times n}$ to denote the set of all equivalence classes of $(m \times n)$ interval matrices. By $m(A)$ we denote the matrix whose entries are the midpoints of the corresponding entries of $A$, i.e. $m(A) = (m(\tilde{a}_{ij}))$. The width of an interval matrix $A$ is the matrix of the widths of its interval entries, $w(A) = (w(\tilde{a}_{ij}))$, where each entry of $w(A)$ is always non-negative. We use $O$ to denote the null matrix and $\tilde{O}$ to denote the null interval matrix, all of whose entries are $\tilde{0}$. Also, we use $I$ to denote the identity matrix and $\tilde{I}$ to denote the identity interval matrix, whose diagonal entries are $\tilde{1}$ and whose off-diagonal entries are $\tilde{0}$.

If $m(A) = m(B)$, then the interval matrices $A$ and $B$ are said to be equivalent, and this equivalence is denoted by $A \approx B$. In particular, if $m(A) = m(B)$ and $w(A) = w(B)$, then $A = B$. If $m(A) = O$, then we say that $A$ is a zero interval matrix, denoted $A \approx \tilde{O}$. In particular, if $m(A) = O$ and $w(A) = O$, then $A$ is the matrix all of whose entries are $[0, 0]$, i.e. $A = \tilde{O}$. Also, if $m(A) = O$ and $w(A) \neq O$, then $A$ is a matrix of zero intervals with $A \approx \tilde{O}$ but $A \neq \tilde{O}$. If $A \not\approx \tilde{O}$ (i.e. $A$ is not equivalent to $\tilde{O}$), then $A$ is said to be a non-zero interval matrix. If $m(A) = I$, then we say that $A$ is an identity interval matrix, denoted $A \approx \tilde{I}$.

In particular, if $m(A) = I$ and $w(A) = O$, then $A$ is the interval matrix with $[1, 1]$ on the diagonal and $[0, 0]$ elsewhere.

Remark 1

An interval vector $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3, \ldots, \tilde{x}_n)^t$ is a column vector whose components are closed intervals. We use $IR^n$ to denote the set of all $n$-dimensional interval vectors. By $m(\tilde{x})$ we denote the vector whose entries are the midpoints of the corresponding entries of $\tilde{x}$, i.e. $m(\tilde{x}) = (m(\tilde{x}_1), m(\tilde{x}_2), m(\tilde{x}_3), \ldots, m(\tilde{x}_n))^t$, and the width of an interval vector is defined by $w(\tilde{x}) = (w(\tilde{x}_1), w(\tilde{x}_2), w(\tilde{x}_3), \ldots, w(\tilde{x}_n))^t$.
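For computation, an interval matrix can conveniently be stored through its lower and upper endpoint arrays, from which $m(A)$ and $w(A)$ are obtained entrywise; the following numpy sketch (illustrative, with arbitrary example data) shows this.

```python
import numpy as np

# Illustrative sketch: an interval matrix stored as its lower and upper endpoint
# arrays, from which the midpoint matrix m(A) and width matrix w(A) are extracted.
A_lower = np.array([[0.0, 1.0],
                    [2.0, 3.0]])
A_upper = np.array([[2.0, 3.0],
                    [4.0, 7.0]])

m_A = (A_lower + A_upper) / 2      # midpoint matrix m(A)
w_A = (A_upper - A_lower) / 2      # width matrix w(A); entries are non-negative

print(m_A)   # [[1. 2.] [3. 5.]]
print(w_A)   # [[1. 1.] [1. 2.]]
```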

3.2 Interval matrix operations

If $A, B \in IR^{m \times n}$, $\tilde{x} \in IR^{n}$ with $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_n)^t$, and $\tilde{\alpha} \in IR$, then

(i) $A + B = \left\langle m(A) + m(B),\ \max\left\{\min\{w(\tilde{a}_{ij}) : w(\tilde{a}_{ij}) \neq 0\},\ \min\{w(\tilde{b}_{ij}) : w(\tilde{b}_{ij}) \neq 0\}\right\} \right\rangle$

(ii) $A - B = \left\langle m(A) - m(B),\ \max\left\{\min\{w(\tilde{a}_{ij}) : w(\tilde{a}_{ij}) \neq 0\},\ \min\{w(\tilde{b}_{ij}) : w(\tilde{b}_{ij}) \neq 0\}\right\} \right\rangle$

(iii) $A \cdot B = \left\langle m(A) \cdot m(B),\ \max\left\{\min\{w(\tilde{a}_{ij}) : w(\tilde{a}_{ij}) \neq 0\},\ \min\{w(\tilde{b}_{ij}) : w(\tilde{b}_{ij}) \neq 0\}\right\} \right\rangle$

(iv) $A \div B = \left\langle m(A) \div m(B),\ \max\left\{\min\{w(\tilde{a}_{ij}) : w(\tilde{a}_{ij}) \neq 0\},\ \min\{w(\tilde{b}_{ij}) : w(\tilde{b}_{ij}) \neq 0\}\right\} \right\rangle$

(v) $\tilde{\alpha} A = \left\langle m(\tilde{\alpha}) \cdot m(A),\ \max\left\{w(\tilde{\alpha}),\ \min\{w(\tilde{a}_{ij}) : w(\tilde{a}_{ij}) \neq 0\}\right\} \right\rangle$

3.3 Eigenvalues and eigenvectors of interval matrices

Let $A$ be an interval matrix. A non-zero interval vector $\tilde{\nu}$ satisfying $A\tilde{\nu} \approx \tilde{\lambda}\tilde{\nu}$ for some $\tilde{\lambda} \in IR$ is said to be an interval eigenvector of $A$, and such a $\tilde{\lambda}$ is called an interval eigenvalue of $A$.

Note: In the case of interval matrices A , we note that the width matrix w ( A ) contains only non-negative entries. Hence the Perron-Frobenius Theorem for such matrices guarantees the existence of a non-negative eigenvalue for the width matrix w ( A ) which in turn will give a unique eigenvalue for an interval matrix corresponding to every real eigenvalue of m ( A ) . The hallmark of this procedure is that the set of eigenvalues obtained for a given interval matrix is unique.

Theorem 5

Let $A$ be an $(n \times n)$ interval matrix.

(a) Corresponding to every real eigenvalue $m(\lambda)$ of $m(A)$, we get a unique interval eigenvalue of $A$.

(b) Corresponding to every normalized eigenvector $m(\nu)$ for the eigenvalue $m(\lambda)$, we get a unique normalized interval eigenvector of $A$.

Proof: (a) Let $m(\lambda)$ be a real eigenvalue of $m(A)$. Find all eigenvalues of $w(A)$. Two cases arise.

Case (i): $w(A)$ has a positive eigenvalue. In this case, we take $w(\lambda) = \min\{w(\lambda') : w(\lambda') > 0,\ w(\lambda')\ \text{is an eigenvalue of}\ w(A)\}$. The pair $\langle m(\lambda), w(\lambda) \rangle$ gives a unique interval eigenvalue of $A$, namely $[m(\lambda) - w(\lambda), m(\lambda) + w(\lambda)]$.

Case (ii): If $w(A)$ does not have a positive eigenvalue, the non-negativity of the entries of $w(A)$ ensures the applicability of the Perron-Frobenius theorem to $w(A)$, so its dominant eigenvalue is $0$ and hence $w(A)$ is nilpotent. Thus all eigenvalues of $w(A)$ are zero, which implies that the interval eigenvalues of $A$ are degenerate.

(b). The unique interval eigenvector corresponding to m ( λ ) is computed as follows:

Let $m(\nu)$ be the normalized eigenvector of $m(A)$ corresponding to the eigenvalue $m(\lambda)$. Find the normalized eigenvector $w(\nu)$ corresponding to $w(\lambda)$ as given in Case (i), or corresponding to $0$, which arises in Case (ii). An eigenvector is non-zero, and $w(\nu)$ and $-w(\nu)$ are both normalized eigenvectors for the same eigenvalue $w(\lambda)$, so the pairing number is taken from the absolute values of the components: for a normalized eigenvector $w(\nu) = (\nu_1, \nu_2, \ldots, \nu_n)^t$ of $w(\lambda)$, define $\mu = \min\{|\nu_i| : \nu_i \neq 0,\ i = 1, 2, \ldots, n\}$. Pairing this $\mu$ with $m(\nu)$ produces the unique interval eigenvector $\begin{pmatrix} [m(\nu_1) - \mu, m(\nu_1) + \mu] \\ [m(\nu_2) - \mu, m(\nu_2) + \mu] \\ \vdots \\ [m(\nu_n) - \mu, m(\nu_n) + \mu] \end{pmatrix}$.

In case the geometric multiplicity of the eigenvalue $w(\lambda)$ is $k > 1$, the $k$ crisp normalized eigenvectors corresponding to $w(\lambda)$ form an $(n \times k)$ rectangular matrix, say $W$. In this case $\mu = \min\{|w_{ij}| : w_{ij} \neq 0,\ i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, k\}$. The $k$ normalized eigenvectors paired with $\mu$ give a unique set of $k$ linearly independent interval eigenvectors. Again, the interval eigenvectors are unique and exhibit the same geometric multiplicity. Hence the theorem.

3.4 Algorithm for eigenvalues and eigenvectors of an interval matrix

Given an $(n \times n)$ interval matrix $A$:

Step (1). Find $m(A)$ and $w(A)$.

Step (2). Find the eigenvalues of $m(A)$ and $w(A)$ over $\mathbb{R}$.

Step (3). Find $w(\lambda) = \min\{w(\lambda') : w(\lambda') > 0,\ w(\lambda')\ \text{is an eigenvalue of}\ w(A)\}$. By the Perron-Frobenius theorem, either such a minimum positive eigenvalue $w(\lambda)$ of $w(A)$ exists, or $w(A)$ is nilpotent.

Step (4). Corresponding to every real eigenvalue $m(\lambda_i)$ of $m(A)$, we consider the pair $\langle m(\lambda_i), w(\lambda) \rangle$ $(i = 1, 2, \ldots, n)$ to obtain the interval eigenvalues of $A$. In case $w(A)$ is nilpotent, the crisp real eigenvalues of $m(A)$ give degenerate interval eigenvalues for $A$.

Step (5). Find the normalized eigenvectors corresponding to every real eigenvalue $m(\lambda_i)$ of $m(A)$, and the normalized eigenvector $w(\nu) = (\nu_1, \nu_2, \ldots, \nu_n)^t$ of $w(A)$ corresponding to $w(\lambda)$. The eigenvector pairing number is $\mu = \min\{|\nu_i| : \nu_i \neq 0,\ i = 1, 2, \ldots, n\}$.

Step (6). When the geometric multiplicity of the eigenvalue $w(\lambda)$ is $k > 1$, the $k$ crisp normalized eigenvectors corresponding to $w(\lambda)$ form an $(n \times k)$ rectangular matrix, say $W$. In this case, the eigenvector pairing number is $\mu = \min\{|w_{ij}| : w_{ij} \neq 0,\ i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, k\}$.

The $k$ normalized eigenvectors paired with $\mu$ give a unique set of $k$ linearly independent normalized interval eigenvectors, thus exhibiting the same geometric multiplicity for the crisp and interval eigenvalues.
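A compact numpy sketch of Steps (1)-(5) is given below. It is illustrative only: the function name is ours, Step (6) on higher geometric multiplicity is not treated, and the eigenvalues of $m(A)$ are assumed to be real.

```python
import numpy as np

def interval_eigen(A_lower, A_upper, tol=1e-12):
    """Sketch of Steps (1)-(5): pair each eigenvalue/eigenvector of m(A) with the
    minimal positive eigenvalue of w(A) and the corresponding pairing number mu."""
    m_A = (A_lower + A_upper) / 2          # Step 1
    w_A = (A_upper - A_lower) / 2

    m_vals, m_vecs = np.linalg.eig(m_A)    # Step 2 (columns are normalized)
    w_vals, w_vecs = np.linalg.eig(w_A)

    # Step 3: minimal positive eigenvalue of w(A); zero when w(A) is nilpotent.
    pos = [i for i, v in enumerate(w_vals) if abs(v.imag) < tol and v.real > tol]
    if pos:
        i_min = min(pos, key=lambda i: w_vals[i].real)
        w_lam = w_vals[i_min].real
        w_vec = np.real(w_vecs[:, i_min])
        # Step 5: eigenvector pairing number = smallest nonzero |component|.
        mu = min(abs(c) for c in w_vec if abs(c) > tol)
    else:
        w_lam, mu = 0.0, 0.0               # degenerate interval case

    results = []
    for k in range(len(m_vals)):           # Steps 4 and 5
        lam, vec = m_vals[k].real, np.real(m_vecs[:, k])
        iv_lam = (lam - w_lam, lam + w_lam)
        iv_vec = [(c - mu, c + mu) for c in vec]
        results.append((iv_lam, iv_vec))
    return results
```

Applied to the interval matrix of Example 1 below, this sketch reproduces, up to rounding and sign conventions of the eigenvectors, the interval eigenvalues $[4.36, 4.64]$ and $[0.36, 0.64]$.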

4 Numerical examples

Example 1

Find the eigenvalues and eigenvectors of the interval matrix $A = \begin{pmatrix} [1, 2] & [1, 2] \\ [1, 3] & [2, 5] \end{pmatrix}$.

Solution: Consider the midpoint matrix $m(A) = \begin{pmatrix} 1.50 & 1.50 \\ 2 & 3.50 \end{pmatrix}$. The eigenvalues of $m(A)$ are $4.50$ and $0.50$, and the corresponding normalized eigenvectors are $\begin{pmatrix} 0.45 \\ 0.89 \end{pmatrix}$ and $\begin{pmatrix} 0.83 \\ -0.56 \end{pmatrix}$. Also consider the width matrix $w(A) = \begin{pmatrix} 0.50 & 0.50 \\ 1 & 1.50 \end{pmatrix}$. The eigenvalues of $w(A)$ are $1.87$ and $0.14$. Both eigenvalues are positive, so we choose the minimum positive eigenvalue $0.14$, whose corresponding normalized eigenvector is $\begin{pmatrix} 0.81 \\ -0.59 \end{pmatrix}$. Here, if $\tilde{\nu}$ is an eigenvector then $-\tilde{\nu}$ is also an eigenvector, and in order to reach the absolute minimum we take $\begin{pmatrix} -0.81 \\ 0.59 \end{pmatrix}$ as the normalized eigenvector.

Now the unique interval eigenvalues of the given interval matrix A are:

  1. $\langle 4.50, 0.14 \rangle = [4.50 - 0.14, 4.50 + 0.14] = [4.36, 4.64]$

  2. $\langle 0.50, 0.14 \rangle = [0.50 - 0.14, 0.50 + 0.14] = [0.36, 0.64]$

This is justified because we are interested in minimising vagueness or error. The interval eigenvalues of the given interval matrix $A$ are $[4.36, 4.64]$ and $[0.36, 0.64]$. The normalized interval eigenvectors of the given interval matrix $A$ are computed as follows:

Case 1: The normalized interval eigenvector corresponding to the interval eigenvalue $\langle 4.50, 0.14 \rangle$ is $\left\langle \begin{pmatrix} 0.45 \\ 0.89 \end{pmatrix}, \begin{pmatrix} -0.81 \\ 0.59 \end{pmatrix} \right\rangle = \left\langle \begin{pmatrix} 0.45 \\ 0.89 \end{pmatrix}, 0.59 \right\rangle = \begin{pmatrix} [0.45 - 0.59, 0.45 + 0.59] \\ [0.89 - 0.59, 0.89 + 0.59] \end{pmatrix} = \begin{pmatrix} [-0.14, 1.04] \\ [0.30, 1.48] \end{pmatrix}$, since $0.59$ is the absolute minimum. Hence for the interval eigenvalue $\tilde{\lambda}_1 = [4.36, 4.64]$, the corresponding normalized interval eigenvector is $\tilde{\nu}_1 = \begin{pmatrix} [-0.14, 1.04] \\ [0.30, 1.48] \end{pmatrix}$.

Case 2: The normalized interval eigenvector corresponding to the interval eigenvalue $\langle 0.50, 0.14 \rangle$ is $\left\langle \begin{pmatrix} 0.83 \\ -0.56 \end{pmatrix}, \begin{pmatrix} -0.81 \\ 0.59 \end{pmatrix} \right\rangle = \left\langle \begin{pmatrix} 0.83 \\ -0.56 \end{pmatrix}, 0.59 \right\rangle = \begin{pmatrix} [0.83 - 0.59, 0.83 + 0.59] \\ [-0.56 - 0.59, -0.56 + 0.59] \end{pmatrix} = \begin{pmatrix} [0.24, 1.42] \\ [-1.15, 0.03] \end{pmatrix}$. Hence for the interval eigenvalue $\tilde{\lambda}_2 = [0.36, 0.64]$, the corresponding normalized interval eigenvector is $\tilde{\nu}_2 = \begin{pmatrix} [0.24, 1.42] \\ [-1.15, 0.03] \end{pmatrix}$. From the above cases, we see that the interval eigenvalues of $A$ are $\tilde{\lambda}_1 = [4.36, 4.64]$ and $\tilde{\lambda}_2 = [0.36, 0.64]$, and the corresponding normalized interval eigenvectors are $\tilde{\nu}_1 = \begin{pmatrix} [-0.14, 1.04] \\ [0.30, 1.48] \end{pmatrix}$ and $\tilde{\nu}_2 = \begin{pmatrix} [0.24, 1.42] \\ [-1.15, 0.03] \end{pmatrix}$.
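As an illustrative cross-check of Example 1, the midpoint and width matrices and their eigenvalues can be recomputed directly with numpy:

```python
import numpy as np

# Cross-check of Example 1: eigenvalues of m(A) and w(A).
A_lower = np.array([[1.0, 1.0],
                    [1.0, 2.0]])
A_upper = np.array([[2.0, 2.0],
                    [3.0, 5.0]])

m_A = (A_lower + A_upper) / 2   # [[1.5, 1.5], [2.0, 3.5]]
w_A = (A_upper - A_lower) / 2   # [[0.5, 0.5], [1.0, 1.5]]

print(np.linalg.eigvals(m_A))   # approximately 4.5 and 0.5
print(np.linalg.eigvals(w_A))   # approximately 1.87 and 0.13; the smaller positive
                                # value is the pairing number for the eigenvalues
```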

Example 2

Find the eigenvalues and eigenvectors of the symmetric interval matrix $A = \begin{pmatrix} [1, 1] & [-4, 1] \\ [-4, 1] & [1, 1] \end{pmatrix}$.

Solution: Consider the midpoint matrix $m(A) = \begin{pmatrix} 1 & -1.50 \\ -1.50 & 1 \end{pmatrix}$. The eigenvalues of $m(A)$ are $2.50$ and $-0.50$, and the corresponding normalized eigenvectors are $\begin{pmatrix} 0.71 \\ -0.71 \end{pmatrix}$ and $\begin{pmatrix} 0.71 \\ 0.71 \end{pmatrix}$. Also consider the width matrix $w(A) = \begin{pmatrix} 0 & 2.50 \\ 2.50 & 0 \end{pmatrix}$. The eigenvalues of $w(A)$ are $2.50$ and $-2.50$. Here we choose the positive eigenvalue $2.50$, whose corresponding normalized eigenvector is $\begin{pmatrix} 0.71 \\ 0.71 \end{pmatrix}$.

Now the unique interval eigenvalues of the given interval matrix A are:

  1. $\langle 2.50, 2.50 \rangle = [2.50 - 2.50, 2.50 + 2.50] = [0, 5]$

  2. $\langle -0.50, 2.50 \rangle = [-0.50 - 2.50, -0.50 + 2.50] = [-3, 2]$

Therefore the interval eigenvalues of $A$ are $\tilde{\lambda}_1 = [0, 5]$ and $\tilde{\lambda}_2 = [-3, 2]$.

The normalized interval eigenvectors of the given interval matrix A are computed as follows:

Case 1: The normalized interval eigenvector corresponding to the interval eigenvalue $\langle 2.50, 2.50 \rangle$ is $\left\langle \begin{pmatrix} 0.71 \\ -0.71 \end{pmatrix}, \begin{pmatrix} 0.71 \\ 0.71 \end{pmatrix} \right\rangle = \left\langle \begin{pmatrix} 0.71 \\ -0.71 \end{pmatrix}, 0.71 \right\rangle = \begin{pmatrix} [0.71 - 0.71, 0.71 + 0.71] \\ [-0.71 - 0.71, -0.71 + 0.71] \end{pmatrix} = \begin{pmatrix} [0, 1.42] \\ [-1.42, 0] \end{pmatrix}$, since $0.71$ is the absolute minimum. For the interval eigenvalue $\tilde{\lambda}_1 = [0, 5]$, the corresponding normalized interval eigenvector is $\tilde{\nu}_1 = \begin{pmatrix} [0, 1.42] \\ [-1.42, 0] \end{pmatrix}$.

Case 2: The normalized interval eigenvector corresponding to the interval eigenvalue $\langle -0.50, 2.50 \rangle$ is $\left\langle \begin{pmatrix} 0.71 \\ 0.71 \end{pmatrix}, \begin{pmatrix} 0.71 \\ 0.71 \end{pmatrix} \right\rangle = \left\langle \begin{pmatrix} 0.71 \\ 0.71 \end{pmatrix}, 0.71 \right\rangle = \begin{pmatrix} [0.71 - 0.71, 0.71 + 0.71] \\ [0.71 - 0.71, 0.71 + 0.71] \end{pmatrix} = \begin{pmatrix} [0, 1.42] \\ [0, 1.42] \end{pmatrix}$. For the interval eigenvalue $\tilde{\lambda}_2 = [-3, 2]$, the corresponding normalized interval eigenvector is $\tilde{\nu}_2 = \begin{pmatrix} [0, 1.42] \\ [0, 1.42] \end{pmatrix}$. Therefore the interval eigenvalues of $A$ are $\tilde{\lambda}_1 = [0, 5]$ and $\tilde{\lambda}_2 = [-3, 2]$, and the corresponding normalized interval eigenvectors are $\tilde{\nu}_1 = \begin{pmatrix} [0, 1.42] \\ [-1.42, 0] \end{pmatrix}$ and $\tilde{\nu}_2 = \begin{pmatrix} [0, 1.42] \\ [0, 1.42] \end{pmatrix}$.

5 An application on electric circuits

Systems of differential equations can be used to model a variety of physical systems, such as predator-prey interactions, vibrations of mechanical systems, and electrical circuits. In real-life problems, the systems are usually complex and have to be modelled as multiple-degrees-of-freedom systems, resulting in systems of linear ordinary differential equations. Moreover, some or all of the decision parameters of such problems may not be known precisely, and they may frequently be treated as suitable intervals.

Consider the electric circuit given in Fig. 1, with $C = [-0.333, 1.667]$, $\tilde{R}_1 = [-1, 3]$, $\tilde{R}_2 = [-0.400, 1.600]$ and $L = [0, 4]$ (see also Figs. 2 and 3).

Fig. 1. Electric circuit.
Fig. 2. Current $\tilde{I}(t)$ for $t = 0$ to $5$ hours.
Fig. 3. Voltage $\tilde{V}(t)$ for $t = 0$ to $5$ hours.

The given system is modeled as a system of interval linear differential equations as

(1)
$$\frac{d\tilde{I}}{dt} = [-2.500, 1.500]\,\tilde{I} + [-1.500, 2.500]\,\tilde{V}, \qquad \frac{d\tilde{V}}{dt} = [-2.500, -0.500]\,\tilde{I} + [-3.500, -1.500]\,\tilde{V},$$
i.e.
$$\frac{d}{dt}\begin{pmatrix} \tilde{I} \\ \tilde{V} \end{pmatrix} = \begin{pmatrix} [-2.500, 1.500] & [-1.500, 2.500] \\ [-2.500, -0.500] & [-3.500, -1.500] \end{pmatrix} \begin{pmatrix} \tilde{I} \\ \tilde{V} \end{pmatrix}.$$
We solve this system using the concept of interval eigenvalues and interval eigenvectors, which are computed using the proposed pairing techniques.

The interval matrix $A$ corresponding to the given system of interval linear differential equations (1) is $A = \begin{pmatrix} [-2.500, 1.500] & [-1.500, 2.500] \\ [-2.500, -0.500] & [-3.500, -1.500] \end{pmatrix}$.

By applying the proposed pairing techniques, the interval eigenvalues of $A$ are $\tilde{\lambda}_1 = [-4, 2]$ and $\tilde{\lambda}_2 = [-5, 1]$, and the corresponding normalized interval eigenvectors are $\tilde{\nu}_1 = \begin{pmatrix} [0.260, 1.154] \\ [-1.154, -0.260] \end{pmatrix}$ and $\tilde{\nu}_2 = \begin{pmatrix} [-0.763, 0.131] \\ [0.502, 1.396] \end{pmatrix}$, respectively.

The solution of the given system of interval linear differential equations (1) is

(2)
$$\begin{pmatrix} \tilde{I}(t) \\ \tilde{V}(t) \end{pmatrix} = \tilde{c}_1\, e^{[-4, 2]\,t} \begin{pmatrix} [0.260, 1.154] \\ [-1.154, -0.260] \end{pmatrix} + \tilde{c}_2\, e^{[-5, 1]\,t} \begin{pmatrix} [-0.763, 0.131] \\ [0.502, 1.396] \end{pmatrix}.$$

Suppose that initially the loop current through $L$ and the voltage across $C$ are approximately $\tilde{0}$ and $\tilde{2}$ respectively; then we have the initial conditions $\tilde{I}(0) = [0, 0]$ and $\tilde{V}(0) = [1.750, 2.250]$. Applying these initial conditions in equation (2), we get

(3)
$$[0.260, 1.154]\,\tilde{c}_1 + [-0.763, 0.131]\,\tilde{c}_2 = [0, 0], \qquad [-1.154, -0.260]\,\tilde{c}_1 + [0.502, 1.396]\,\tilde{c}_2 = [1.750, 2.250].$$
Solving equation (3) by the Gauss elimination method and the proposed pairing technique, we get $\tilde{c}_1 = [0.967, 1.861]$ and $\tilde{c}_2 = [2.715, 3.609]$. The current and the voltage at any time $t$ are given by
(4)
$$\tilde{I}(t) = [0.553, 1.447]\, e^{[-4, 2]\,t} + [-1.447, -0.553]\, e^{[-5, 1]\,t}, \qquad \tilde{V}(t) = [-1.447, -0.553]\, e^{[-4, 2]\,t} + [2.553, 3.447]\, e^{[-5, 1]\,t}.$$
After simplification, we get
$$\tilde{I}(t) = \langle e^{-t} - e^{-2t},\ 3 \rangle = [\,e^{-t} - e^{-2t} - 3,\ e^{-t} - e^{-2t} + 3\,], \qquad \tilde{V}(t) = \langle -e^{-t} + 3e^{-2t},\ 3 \rangle = [\,-e^{-t} + 3e^{-2t} - 3,\ -e^{-t} + 3e^{-2t} + 3\,].$$

The above example concerns a system of interval linear differential equations obtained on the premise that the circuit parameters are uncertain, which is why the system is modeled and solved in interval form.

With this choice of initial conditions $\tilde{V}(0) = [1.750, 2.250]$ and $\tilde{I}(0) = [0, 0]$, our solution shows that, as time progresses, the capacitor discharges exponentially quickly through the circuit. In case the entries are precisely known, the above solution automatically transforms into the solution of the crisp system of linear differential equations when the intervals in the solution are replaced by their corresponding midpoints.
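As an illustrative sanity check, one can verify numerically that the midpoint parts of the solution satisfy the midpoint system $\frac{d}{dt}(I, V)^t = m(A)\,(I, V)^t$ with the midpoint initial conditions $I(0) = 0$ and $V(0) = 2$:

```python
import numpy as np

# Sanity check: the midpoint parts of the solution, I(t) = e^(-t) - e^(-2t) and
# V(t) = -e^(-t) + 3 e^(-2t), satisfy d/dt [I, V]^T = m(A) [I, V]^T with m(A) below,
# together with the midpoint initial conditions I(0) = 0 and V(0) = 2.
m_A = np.array([[-0.5,  0.5],
                [-1.5, -2.5]])

def sol(t):
    return np.array([np.exp(-t) - np.exp(-2 * t),
                     -np.exp(-t) + 3 * np.exp(-2 * t)])

def dsol(t):
    return np.array([-np.exp(-t) + 2 * np.exp(-2 * t),
                     np.exp(-t) - 6 * np.exp(-2 * t)])

assert np.allclose(sol(0.0), [0.0, 2.0])
for t in np.linspace(0.0, 5.0, 11):
    assert np.allclose(dsol(t), m_A @ sol(t))
```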

6 Conclusion

On the collection $IR$ of all closed intervals in $\mathbb{R}$, a relation is defined which turns out to be an equivalence relation on $IR$. The collection of all equivalence classes under this relation is a field $E$ which is isomorphic to $\mathbb{R}$. This field $E$ in turn induces a vector space $E^n$ over $E$, which is a finite-dimensional vector space over $E$ for $n = 1, 2, \ldots$. A whole gamut of ideas available in classical linear algebra can now be applied to $E^n$ over $E$, including the canonicalization of interval matrices to diagonal form, Jordan form and rational form. We can extend this study to bilinear and quadratic forms as well, and investigate some interesting applications of these forms.

In $E^n$ over $E$, we are able to obtain unique eigenvalues and normalized eigenvectors of interval matrices, as in the classical case. We have provided the methodology for obtaining the eigenvalues and normalized eigenvectors of interval matrices, together with two suitable examples. The distinct advantage of the field and vector space structure introduced in this paper is that it guarantees the applicability of celebrated theorems of classical linear algebra to interval linear algebra. By applying the proposed pairing techniques, we also discussed a real-life example from electric circuit theory in an uncertain environment, modelled as a system of interval linear differential equations. Numerical examples are provided to illustrate the theory developed in this paper.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. , , . Introduction to Interval Computations (first ed.). New York: Academic Press; .
  2. , . The Interval Eigenvalue Problem. ZAMM J. Appl. Mathe. Mech.. 1991;71(1):61-64.
    [CrossRef] [Google Scholar]
  3. , . Singular values of an interval matrix. Linear Algebr. Appl.. 1991;151:125-133.
    [Google Scholar]
  4. , , . On Arithmetic Operations of Interval Numbers. Int. J. Uncertainty, Fuzziness Knowledge Based Syst.. 2005;13(6):619-631.
    [CrossRef] [Google Scholar]
  5. , . On some properties of interval matrices. Int. J. Comput. Mathe. Sci.. 2007;1(1):35-42.
    [Google Scholar]
  6. , , , . Strongly universal and weak interval eigenvectors in max-plus algebra. Mathematics. 2020;8(8):1348.
    [CrossRef] [Google Scholar]
  7. , , . Interval arithmetic in matrix computations. Part 2, SIAM. J. Num. Anal.. 1967;4(1):1-9.
    [Google Scholar]
  8. , . On the solution of linear algebraic equations with interval coefficients. Linear Algebra Appl.. 1969;2:153-165.
    [Google Scholar]
  9. , . Bounding the solution of interval linear Equations. SIAM. J. Num. Anal.. 1992;29(5):1493-1503.
    [Google Scholar]
  10. , , . Global Optimization Using Interval Analysis (second ed.). New York: Marcel Dekker; .
  11. , . The extreme eigenvalues and stability of real symmetric interval matrices. IEEE Trans. Automat. Control. 1992;37(4):532-535.
    [Google Scholar]
  12. , , , , . Applied Interval Analysis (first ed.). London: Springer-Verlag; .
  13. , . Outer interval solution of the eigenvalue problem under general form parametric dependencies. Reliab. Comput.. 2006;12(2):121-140.
    [Google Scholar]
  14. , , . Computing eigenvalue bounds of structures with uncertain-but-non-random parameters by a method based on perturbation theory. Comm. Numer. Methods Engrg.. 2007;23(11):973-982.
    [Google Scholar]
  15. , , . An interval-matrix branch and-bound algorithm for bounding eigenvalues. Optim. Methods Softw.. 2017;32(4):872-891.
    [CrossRef] [Google Scholar]
  16. , . Interval Methods for Systems of Equations (first ed.). Cambridge: Cambridge University Press; .
  17. , . Linear algebra in the vector space of intervals. arXiv math. 2010;1:1-11. doi:1006.5311
    [Google Scholar]
  18. , , , . Bounds of eigenvalues for structures with an interval description of uncertain-but-non-random parameters. Chaos Solitons Fractals.. 1996;7(3):425-434.
    [Google Scholar]
  19. , , , . An approximate method for the standard interval eigenvalue problem of real non-symmetric interval matrices. Comm. Numer. Methods Engrg.. 2001;17(4):239-251.
    [Google Scholar]
  20. , , . Solution theorems for the standard eigenvalue problem of structures with uncertain-but-bounded parameters. J. Sound Vib.. 2005;282(1–2):381-399.
    [Google Scholar]
  21. , , , . Eigenvalue bounds of structures with uncertain-but-bounded parameters. J. Sound Vib.. 2005;282:297-312.
    [Google Scholar]
  22. , , . On eigenvector bounds. BIT Num. Mathe.. 2004;43:823-837.
    [Google Scholar]
  23. , , . Theory and Methodology: On comparing interval numbers. Eur. J. Oper. Res.. 2000;127(1):28-43.
    [CrossRef] [Google Scholar]
  24. , . eigenvalues of Interval Matrix. Am. J. Appl. Math. Comput.. 2021;1(4):6-11.
    [CrossRef] [Google Scholar]
  25. , . Introduction to Linear Algebra (third ed.). Wellesley-Cambridge Press; .
  26. , , . Eigenvalues bounds for symmetric interval matrices. Int. J. Comput. Sci. Mathe.. 2015;6(4):311-322.
    [CrossRef] [Google Scholar]
  27. , . Theory of an interval algebra and its application to numerical analysis. Japan J. Indust. Appl. Math.. 2009;26:125-143.
    [Google Scholar]
  28. , . Rigorous error bounds for computed eigensystem. Comput. J.. 1961;4:230-234.
    [Google Scholar]
  29. , . A simple method for error bounds of eigenvalues of symmetric matrices. Linear Algebra Appl.. 2001;324:227-234.
    [Google Scholar]
  30. , , , . An evolution strategy method for computing eigenvalue bounds of interval matrices. Appl. Math. Comput.. 2008;196(1):257-265.
    [Google Scholar]