
Original Article
Journal of King Saud University – Science 23(4); 331–335
doi: 10.1016/j.jksus.2010.06.013

β-Approach to near set theory

Department of Mathematics, Faculty of Science, Tanta University, Egypt

*Corresponding author via_marei@yahoo.com (E.A. Marei)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Available online 26 February 2011

Peer review under responsibility of King Saud University.

Abstract

The aim of this paper is to introduce two approaches to near sets by using topological structures and β-open sets. Some fundamental properties and characterizations are given. We obtain a comparison between these types of approximations and the approximation introduced by J.F. Peters.

Keywords

Rough set
Near set
Lower coverage
Topological space
Generalized approximation space

1 Introduction

Rough set theory (Pawlak, 1981), proposed by Pawlak in 1981, generalizes classical set theory for describing and modelling vagueness. It has recently received wide attention from researchers, both in real-life applications and in the theory itself, as a tool for dealing with inexact, uncertain or vague knowledge. Rough set theory regards knowledge as essentially a capability of classification; such a capability exists not only in human beings but also in other species, and the knowledge an agent owns is embodied in its ability to classify. In the Pawlak model (U, R), the equivalence relation R characterizes a classification of the universe U. With such knowledge we can express concepts over the universe: when a concept can be represented exactly by the knowledge in the knowledge base, it is called an accurate (exact) concept or set; otherwise it is called a rough concept.

Near set theory, introduced by J.F. Peters, can be viewed as a generalization of rough set theory. In this theory Peters uses the features of objects to define the nearness of objects (Peters, 2008a), and consequently the classification of the universal set with respect to the available information about the objects.


2 Basic concepts

This section covers some fundamental concepts in rough sets and near sets.

The rough set approach introduced by Pawlak provides a basis for concluding to what degree a set of design models representing a standard is part of a set of candidate design models. In this section, we briefly consider several fundamental concepts in rough set theory, namely, set approximation and attribute reduction. For computational reasons, rough sets provide a syntactic representation of knowledge in the form of data tables. Informally, a data table is a collection of rows, each labeled with some form of input, and columns, each labeled with the name of an attribute that computes a value from the row input. Formally, a data (information) table IS is a pair (U, A), where U is a non-empty, finite set of objects and A is a non-empty, finite set of attributes with $a : U \to V_a$ for every $a \in A$. For each $B \subseteq A$, there is an associated equivalence relation $Ind_{IS}(B) = \{(x, x') \in U^2 : a(x) = a(x')\ \forall a \in B\}$. If $(x, x') \in Ind_{IS}(B)$, we say that x and x' are indiscernible from each other relative to the attributes in B. The notation $[x]_B$ denotes the block of B-indiscernible objects in the partition of U containing x. For $X \subseteq U$, the set X can be approximated using only the information contained in B by constructing the B-lower and B-upper approximations, denoted $\underline{B}(X)$ and $\overline{B}(X)$ respectively, where $\underline{B}(X) = \{x \in U : [x]_B \subseteq X\}$ and $\overline{B}(X) = \{x \in U : [x]_B \cap X \neq \emptyset\}$. The lower approximation $\underline{B}(X)$ of a set X is the collection of objects that can be classified with full certainty as members of X using the knowledge represented by the attributes in B. By contrast, the upper approximation $\overline{B}(X)$ is the collection of objects representing both certain and possibly uncertain knowledge about X. Whenever $\underline{B}(X) = \overline{B}(X)$, the objects can be classified perfectly, and X forms what is known as a crisp (exact) set.
If $\underline{B}(X)$ is a proper subset of $\overline{B}(X)$, then the set X is considered rough (inexact) relative to B. Some rough set concepts are introduced in Pawlak and Skowron (2007a,b,c) and Pawlak (2004).
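The B-lower and B-upper approximations above can be sketched directly on a toy information table; the table, the attribute name `a` and the object labels below are invented for illustration, and only the construction itself follows the text:

```python
# Toy sketch of Pawlak's B-lower and B-upper approximations.

def partition(U, B, table):
    """Partition U into B-indiscernibility classes."""
    classes = {}
    for x in U:
        key = tuple(table[x][attr] for attr in B)
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def approximations(U, B, table, X):
    """Return the (B-lower, B-upper) approximations of X."""
    lower, upper = set(), set()
    for block in partition(U, B, table):
        if block <= X:          # block certainly inside X
            lower |= block
        if block & X:           # block meets X, so possibly inside
            upper |= block
    return lower, upper

U = {'x1', 'x2', 'x3', 'x4'}
table = {'x1': {'a': 0}, 'x2': {'a': 0}, 'x3': {'a': 1}, 'x4': {'a': 1}}
lo, up = approximations(U, ['a'], table, {'x1', 'x2', 'x3'})
# lower {'x1','x2'} is a proper subset of upper U, so this X is rough
```

Here the two blocks are {x1, x2} and {x3, x4}; only the first lies wholly inside X, while both meet X.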

The near set approach was introduced by J.F. Peters. Underlying the study of near sets is an interest in classifying sample objects by means of probe functions associated with object features. More recently, the term feature has been defined as the make, form, fashion or shape (of an object). Let F denote a set of features for objects in a set X. For any feature $a \in F$, Peters associates a function $f_a$ that maps X to some set $V_{f_a}$; the value $f_a(x)$ is a measurement associated with feature a of an object $x \in X$. The function $f_a$ is called a probe function (Pawlak and Skowron, 1994). Peters defined the following concepts in Peters and Henry (2009), Peters (2008b), Peters and Ramanna (2007), Peters et al. (2007) and Peters (2007a).

A generalized approximation space (GAS) is a tuple GAS = (U, F, $N_r$, $\nu_B$), where U is a universe of objects, F is a set of functions representing object features, $N_r$ is a neighbourhood family function and $\nu_B$ is a lower rough coverage.

The equivalence class containing x with respect to the probe functions in $B_r$, where r is the number of considered features, is defined as $[x]_{B_r} = \{x' \in U : f(x') = f(x)\ \forall f \in B_r\}$. The family of neighbourhoods $N_r(F)$ is then $N_r(F) = \bigcup_{B_r \in P_r(F)} [x]_{B_r}$, where $P_r(F) = \{B_r \subseteq F : |B_r| = r,\ 1 \leq r \leq |F|\}$.

Information about a sample $X \subseteq U$ can be approximated from the information contained in B by constructing the $N_r(B)$-lower approximation $N_r(B)_* X = \bigcup_{x : [x]_{B_r} \subseteq X} [x]_{B_r}$ and the $N_r(B)$-upper approximation $N_r(B)^* X = \bigcup_{x : [x]_{B_r} \cap X \neq \emptyset} [x]_{B_r}$. Then $N_r(B)_* X \subseteq N_r(B)^* X$, and the boundary region $BND_{N_r(B)} X$ between the upper and lower approximations of X is defined as $BND_{N_r(B)} X = N_r(B)^* X - N_r(B)_* X$.
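The family $N_r(B)$ and its lower and upper approximations can be computed by enumerating every feature subset of size r; the probe-function values below are invented for the sketch:

```python
# Sketch of the N_r(B) lower and upper approximations: enumerate every
# feature subset B_r of size r, form its equivalence classes, and take
# unions of classes inside (resp. meeting) the sample X.
from itertools import combinations

def eq_classes(Br, U, phi):
    """Equivalence classes of U under the probe functions in Br."""
    classes = {}
    for x in U:
        classes.setdefault(tuple(phi[f][x] for f in Br), set()).add(x)
    return list(classes.values())

def n_r_approximations(U, F, phi, r, X):
    lower, upper = set(), set()
    for Br in combinations(sorted(F), r):
        for c in eq_classes(Br, U, phi):
            if c <= X:
                lower |= c      # class wholly inside the sample X
            if c & X:
                upper |= c      # class meeting the sample X
    return lower, upper

U = {'x1', 'x2', 'x3'}
phi = {'f': {'x1': 0, 'x2': 0, 'x3': 1},
       'g': {'x1': 0, 'x2': 1, 'x3': 1}}
lo, up = n_r_approximations(U, {'f', 'g'}, phi, 1, {'x1', 'x2'})
# the boundary region is up - lo
```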

The lower rough coverage is defined by $\nu_B : P(U) \times P(U) \to [0, 1]$, $\nu_i(B_i(x), N_r(B)_* X) = \frac{|B_i(x) \cap N_r(B)_* X|}{|N_r(B)_* X|}$ for $N_r(B)_* X \neq \emptyset$, and $\nu_i(B_i(x), N_r(B)_* X) = 1$ if $N_r(B)_* X = \emptyset$.

In Peters (2007a,b), Peters introduced the following meanings:

An element x is near to an element y if ∃f ∈ F such that f(x) = f(y).

A set X is near to a set Y if ∃x ∈ X,y ∈ Y such that x is near to y.

A set X is termed a near set relative to a chosen family of neighbourhoods $N_r(B)$ if and only if $|BND_{N_r(B)} X| \geq 0$.


3 Generalization of near set theory

In this section we use a general relation and hence introduce a new approach; consequently, we obtain new general near lower (upper) approximations for any near set. We also introduce modifications of some concepts.

Definition 3.1

Let $\phi_i \in B$, $1 \leq i \leq |B|$, be general relations defined on a nonempty set X. Then we can introduce a general neighbourhood of an element $x \in X$ as $(x)_{\phi_i}^r = \{y \in X : |\phi_i(y) - \phi_i(x)| \leq r\}$, where $|\cdot|$ is the absolute value and r is the length of this neighbourhood.
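A minimal sketch of this general neighbourhood for one real-valued probe function; the sample set, its values and the helper name `neighbourhood` are illustrative assumptions, not taken from the paper:

```python
# General neighbourhood of Definition 3.1 for a single probe function.

def neighbourhood(x, X, phi, r):
    """(x)_phi^r = { y in X : |phi(y) - phi(x)| <= r }."""
    return {y for y in X if abs(phi[y] - phi[x]) <= r}

X = {'p', 'q', 'w'}
phi = {'p': 0.1, 'q': 0.25, 'w': 0.9}
nbhd = neighbourhood('p', X, phi, 0.2)   # {'p', 'q'}
```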

Definition 3.2

Let $B \subseteq F$ be a set of functions representing features of $x, x' \in X$. Objects x and x' are minimally near each other if $\exists \phi_i \in B$ such that $x' \in (x)_{\phi_i}^r$.

Definition 3.3

Let $Y, Y' \subseteq X$ and $B \subseteq F$. The set Y is near to Y' if $\exists x \in Y,\ x' \in Y'$ such that x is near to x'.

Theorem 3.1

Any subset of X is near to X.

Proof

The proof follows immediately from Definitions 3.2 and 3.3. □

Remark 3.1

Every set X is a near set (it is near to itself), since every element $x \in X$ is near to itself.

Definition 3.4

Let $(X, \tau_{\phi_i})$ be topological spaces, where $\phi_i \in B$, $1 \leq i \leq |B|$. Hence we can define new near lower and upper approximations of any subset $A \subseteq X$ with respect to one probe function $\phi_i$ as $\underline{N}_1(A) = \bigcup \{G : G \in N_1(B),\ G \subseteq A\}$, $\overline{N}_1(A) = \bigcap \{F : F \in [N_1(B)]^c,\ A \subseteq F\}$, where $N_1(B) = \{G : G \in \bigcup_{\phi_i \in B} \tau_{\phi_i}\}$ and $\tau_{\phi_i}$ is the topology generated by the family of general neighbourhoods with respect to the probe function $\phi_i \in B$.

Remar 3.2

The new near lower and upper approximations with respect to two features are defined as $\underline{N}_2(A) = \bigcup\{G : G \in N_2(B),\ G \subseteq A\}$, $\overline{N}_2(A) = \bigcap\{F : F \in [N_2(B)]^c,\ A \subseteq F\}$, where $N_2(B) = \{G : G \in \bigcup_{\phi_i, \phi_j \in B} \tau_{\phi_i \phi_j},\ i \neq j\}$ and $\tau_{\phi_i \phi_j}$ is the topology generated by the family of general neighbourhoods with respect to two features. Consequently, $\underline{N}_{|B|}(A) = \bigcup\{G : G \in N_{|B|}(B),\ G \subseteq A\}$, $\overline{N}_{|B|}(A) = \bigcap\{F : F \in [N_{|B|}(B)]^c,\ A \subseteq F\}$, where $N_{|B|}(B) = \{G : G \in \tau_{\phi_1 \cdots \phi_{|B|}}\}$ and $\tau_{\phi_1 \cdots \phi_{|B|}}$ is the topology generated by the family of general neighbourhoods with respect to all probe functions.
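Over a finite family of open sets, the lower and upper operators of Definition 3.4 reduce to a union of open subsets and an intersection of closed supersets; the family `N1` and the subset `A` below are toy inputs, not taken from the paper:

```python
# Near lower/upper approximations over a finite family N1 of open sets.

def near_lower_upper(N1, X, A):
    inside = [G for G in N1 if G <= A]
    lower = set().union(*inside) if inside else set()
    upper = set(X)
    for G in N1:
        F = X - G                    # complement of an open set is closed
        if A <= F:
            upper &= F               # intersect closed sets containing A
    return lower, upper

X = {'x1', 'x2', 'x3', 'x4'}
N1 = [set(), {'x2'}, {'x3'}, {'x2', 'x3'}, {'x1', 'x2'}, set(X)]
lo, up = near_lower_upper(N1, X, {'x2', 'x4'})
# lower = {'x2'}, upper = {'x1', 'x2', 'x4'}
```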

Definition 3.5

Let $(X, \tau_{\phi_i})$ be topological spaces, where $1 \leq i \leq |B|$. The accuracy measure of any subset $A \subseteq X$ with respect to the probe functions $\phi_i$, $i = 1, 2, \ldots, |B|$, is defined as $\alpha_i(A) = \frac{|\underline{N}_i(A)|}{|\overline{N}_i(A)|}$, $A \neq \emptyset$.

Remark 3.3

Note that $0 \leq \alpha_i(A) \leq 1$; it measures the degree of exactness of any subset $A \subseteq X$. If $\alpha_i(A) = 1$, then A is an exact set with respect to i features.

Definition 3.6

Let $(X, \tau_{\phi_i})$ be topological spaces, where $\phi_i \in B$. The generalized lower rough coverage of any subset Y of the family of neighbourhoods with respect to B is defined as $\nu_i(Y, \underline{N}_i(D)) = \frac{|Y \cap \underline{N}_i(D)|}{|\underline{N}_i(D)|}$ for $\underline{N}_i(D) \neq \emptyset$, where D is the decision class, i.e. the set of acceptable objects (Peters, 2007a). If $\underline{N}_i(D) = \emptyset$, then $\nu_i(Y, \underline{N}_i(D)) = 1$.

Remark 3.4

Note that $0 \leq \nu_i \leq 1$; it measures the degree to which the subset Y covers the sure region $\underline{N}_i(D)$.


4 β-Approach to near set theory

In this section we introduce a new approach to near sets by using β-open sets. Also we obtain another β-modification of some concepts.

Definition 4.1

A subset A of a topological space (X, τ) is called β-open (Abd El-Monsef et al., 1983) if $A \subseteq cl(int(cl(A)))$. The set of all β-open sets is denoted by βO(X).
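β-openness can be checked mechanically in a finite space by composing the closure and interior operators; the three-point topology below is an invented example:

```python
# Check beta-openness, A ⊆ cl(int(cl(A))), in a finite topological space.

def interior(A, opens):
    members = [G for G in opens if G <= A]
    return set().union(*members) if members else set()

def closure(A, opens, X):
    result = set(X)
    for G in opens:
        F = X - G                    # F is closed
        if A <= F:
            result &= F
    return result

def is_beta_open(A, opens, X):
    return A <= closure(interior(closure(A, opens, X), opens), opens, X)

X = {1, 2, 3}
opens = [set(), {1}, {1, 2}, X]
# {1, 3}: cl = X, int(X) = X, cl(X) = X, so {1, 3} is beta-open;
# {3}:    cl = {3}, int({3}) = empty, so {3} is not beta-open
```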

Definition 4.2

Let $(X, \tau_{\phi_i})$ be topological spaces, where $\phi_i \in B$, $1 \leq i \leq |B|$. The new β-near lower and upper approximations of any subset $A \subseteq X$ with respect to one feature of the probe functions B are defined as $\underline{N}_{\beta 1}(A) = \bigcup\{G : G \in N_{\beta 1}(B),\ G \subseteq A\}$, $\overline{N}_{\beta 1}(A) = \bigcap\{F : F \in [N_{\beta 1}(B)]^c,\ A \subseteq F\}$, where $N_{\beta 1}(B) = \{G : G \in \bigcup_{i = 1, 2, \ldots, |B|} \beta_i O(X)\}$ and $\beta_i O(X)$ is the family of β-open sets with respect to the topology $\tau_{\phi_i}$. Hence the boundary region of A with respect to one feature is defined as $b_{N_{\beta 1}}(A) = \overline{N}_{\beta 1}(A) - \underline{N}_{\beta 1}(A)$.

Remar 4.1

The new β-near lower and upper approximations with respect to two features take the form $\underline{N}_{\beta 2}(A) = \bigcup\{G : G \in N_{\beta 2}(B),\ G \subseteq A\}$, $\overline{N}_{\beta 2}(A) = \bigcap\{F : F \in [N_{\beta 2}(B)]^c,\ A \subseteq F\}$, where $N_{\beta 2}(B) = \{G : G \in \bigcup_{i,j = 1, 2, \ldots, |B|} \beta_{i,j} O(X),\ i \neq j\}$ and $\beta_{i,j} O(X)$ is the family of β-open sets with respect to the topology $\tau_{\phi_i \phi_j}$. Consequently, $\underline{N}_{\beta |B|}(A) = \bigcup\{G : G \in N_{\beta |B|}(B),\ G \subseteq A\}$, $\overline{N}_{\beta |B|}(A) = \bigcap\{F : F \in [N_{\beta |B|}(B)]^c,\ A \subseteq F\}$, where $N_{\beta |B|}(B) = \{G : G \in \beta_{1, 2, \ldots, |B|} O(X)\}$ and $\beta_{1, 2, \ldots, |B|} O(X)$ is the family of β-open sets with respect to the topology $\tau_{\phi_1 \phi_2 \cdots \phi_{|B|}}$.

Definition 4.3

Let $(X, \tau_{\phi_i})$ be topological spaces, where $\phi_i \in B$, $1 \leq i \leq |B|$. Hence we can define the β-near accuracy measure of any subset $A \subseteq X$ with respect to i features of the probe functions B as $\alpha_{N_{\beta i}}(A) = \frac{|\underline{N}_{\beta i}(A)|}{|\overline{N}_{\beta i}(A)|}$, $A \neq \emptyset$.

Remark 4.2

Note that $0 \leq \alpha_{N_{\beta i}}(A) \leq 1$; it gives the degree of exactness of any subset $A \subseteq X$. If $\alpha_{N_{\beta i}}(A) = 1$, then A is an $N_{\beta i}$-exact set with respect to i features.

Theorem 4.1

For any subset $A \subseteq X$, $\underline{N}_{\beta i}(A)$ is near to $\overline{N}_{\beta i}(A)$, where $1 \leq i \leq |B|$.

Proof

From Definition 4.2 and Remark 4.1, we can deduce that $\underline{N}_{\beta i}(A) \subseteq \overline{N}_{\beta i}(A)$. Hence from Theorem 3.1, we get the proof. □

Remark 4.3

For any subset $A \subseteq X$, $b_{N_{\beta i}}(A)$ is near to $\overline{N}_{\beta i}(A)$, where $1 \leq i \leq |B|$.

Remark 4.4

A set A with boundary $|b_{N_{\beta i}}(A)| \geq 0$ is a near set.

Theorem 4.2

Every rough set is a near set but not every near set is a rough set.

Proof

There are two cases to consider:

  1. $|b_{N_{\beta i}}(A)| > 0$. If a set $A \subseteq X$ has been approximated with a nonempty boundary, then A is a rough set as well as a near set.

  2. $|b_{N_{\beta i}}(A)| = 0$. If a set $A \subseteq X$ has been approximated with an empty boundary, then A is a near set but not a rough set. □

Definition 4.4

Let $(X, \tau_{\phi_i})$ be topological spaces, where $\phi_i \in B$, $1 \leq i \leq |B|$. The new generalized lower rough coverage of any subset $Y \subseteq X$ with respect to the sure region of the decision class D is defined as $\nu_{\beta i}(Y, \underline{N}_{\beta i}(D)) = \frac{|Y \cap \underline{N}_{\beta i}(D)|}{|\underline{N}_{\beta i}(D)|}$, $\underline{N}_{\beta i}(D) \neq \emptyset$.

Remark 4.5

Note that $0 \leq \nu_{\beta i}(Y, \underline{N}_{\beta i}(D)) \leq 1$; it measures the degree to which the subset Y covers the acceptable objects. If $\underline{N}_{\beta i}(D) = \emptyset$, then $\nu_{\beta i}(Y, \underline{N}_{\beta i}(D)) = 1$.

Example 4.1

Let s, a, r be three features defined on a nonempty set X = {x1, x2, x3, x4} as in Table 1.

If the length of the neighbourhood of the feature s (resp. a, r) equals 0.2 (resp. 0.9, 0.5), then $N_1(B) = \{\xi(s_{0.2}), \xi(a_{0.9}), \xi(r_{0.5})\}$, where

$\xi(s_{0.2}) = \{\{x_1, x_2\}, \{x_1, x_2, x_3\}, \{x_2, x_3, x_4\}, \{x_3, x_4\}\}$,
$\xi(a_{0.9}) = \{\{x_1, x_4\}, \{x_2, x_3\}, \{x_2, x_3, x_4\}, \{x_1, x_3, x_4\}\}$,
$\xi(r_{0.5}) = \{\{x_1, x_3, x_4\}, \{x_2\}\}$.

So

$\tau_{s_{0.2}} = \{\{x_2\}, \{x_3\}, \{x_1, x_2\}, \{x_2, x_3\}, \{x_3, x_4\}, \{x_1, x_2, x_3\}, \{x_2, x_3, x_4\}, X, \emptyset\}$,
$\tau_{a_{0.9}} = \{\{x_3\}, \{x_4\}, \{x_3, x_4\}, \{x_2, x_3\}, \{x_1, x_4\}, \{x_2, x_3, x_4\}, \{x_1, x_3, x_4\}, X, \emptyset\}$,
$\tau_{r_{0.5}} = \{\{x_2\}, \{x_1, x_3, x_4\}, X, \emptyset\}$.

Hence $N_1(B) = \{\{x_2\}, \{x_3\}, \{x_4\}, \{x_1, x_2\}, \{x_1, x_4\}, \{x_2, x_3\}, \{x_3, x_4\}, \{x_1, x_2, x_3\}, \{x_1, x_3, x_4\}, \{x_2, x_3, x_4\}, X, \emptyset\}$. Also, we can get $N_2(B) = \{\xi(s_{0.2}, a_{0.9}), \xi(s_{0.2}, r_{0.5}), \xi(a_{0.9}, r_{0.5})\}$, where

$\xi(s_{0.2}, a_{0.9}) = \{\{x_1\}, \{x_2, x_3\}, \{x_2, x_3, x_4\}, \{x_3, x_4\}\}$,
$\xi(s_{0.2}, r_{0.5}) = \{\{x_1\}, \{x_2\}, \{x_3, x_4\}\}$,
$\xi(a_{0.9}, r_{0.5}) = \{\{x_1, x_4\}, \{x_2\}, \{x_3, x_4\}, \{x_1, x_3, x_4\}\}$.

So

$\tau_{s_{0.2} a_{0.9}} = \{\{x_1\}, \{x_3\}, \{x_2, x_3\}, \{x_3, x_4\}, \{x_1, x_3\}, \{x_2, x_3, x_4\}, \{x_1, x_2, x_3\}, \{x_1, x_3, x_4\}, X, \emptyset\}$,
$\tau_{s_{0.2} r_{0.5}} = \{\{x_1\}, \{x_2\}, \{x_3, x_4\}, \{x_1, x_2\}, \{x_1, x_3, x_4\}, \{x_2, x_3, x_4\}, X, \emptyset\}$,
$\tau_{a_{0.9} r_{0.5}} = \{\{x_2\}, \{x_4\}, \{x_1, x_4\}, \{x_3, x_4\}, \{x_2, x_4\}, \{x_1, x_3, x_4\}, \{x_1, x_2, x_4\}, \{x_2, x_3, x_4\}, X, \emptyset\}$.

So $N_2(B) = N_3(B) = \{\{x_1\}, \{x_2\}, \{x_3\}, \{x_4\}, \ldots, X, \emptyset\}$. Consequently $\tau_{s_{0.2} r_{0.5}} = \tau_{s_{0.2} r_{0.5} a_{0.9}}$. That means the reduct of these features is {s, r}, so the feature a can be cancelled.
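As a sanity check on Example 4.1, one can regenerate $\tau_{s_{0.2}}$ from the family of s-neighbourhoods with r = 0.2; in a finite space, closing the family under pairwise unions and intersections to a fixpoint yields the generated topology:

```python
# Regenerate tau_{s_0.2} from Table 1's s-values and r = 0.2.
from itertools import combinations

X = ('x1', 'x2', 'x3', 'x4')
s = {'x1': 0.51, 'x2': 0.56, 'x3': 0.72, 'x4': 0.77}

# the neighbourhood family xi(s_0.2)
subbase = {frozenset(y for y in X if abs(s[y] - s[x]) <= 0.2) for x in X}

def topology(subbase, X):
    """Close a family under unions/intersections (finite space)."""
    opens = set(subbase) | {frozenset(), frozenset(X)}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(opens), 2):
            for c in (a & b, a | b):
                if c not in opens:
                    opens.add(c)
                    changed = True
    return opens

tau = topology(subbase, X)
# tau has the 9 members listed for tau_{s_0.2} in the text
```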

For the β-approach to near sets, we get $N_{\beta 1}(B) = N_{\beta 2}(B) = N_{\beta 3}(B) = \{\{x_1\}, \{x_2\}, \{x_3\}, \{x_4\}, \ldots, X, \emptyset\}$. The following example gives a comparison between the classical approach and the two new approaches by using the accuracy measure.

Example 4.2

From Example 4.1, we can introduce Table 2, where Q(X) is a subset of X.

From Table 2 we note that the classical approximations of near sets are stronger than the classical approximations of rough sets; moreover, when we use the first generalized approach to near sets, many sets become completely exact.

We also find that all sets become $N_{\beta 1}$-exact; that is, every set in this example becomes exact with respect to only one feature when the β-near approach is used.

Hence the β-near approach is the best of the approaches studied here, and these approximations can serve as a starting point for real-life applications in many fields of science.

Table 1 The values of the studied features.
s a r
x1 0.51 1.2 0.53
x2 0.56 3.1 2.35
x3 0.72 2.8 0.72
x4 0.77 1.9 0.95
Table 2 Comparison between the traditional and modified approximations (α′i denotes the accuracy of Definition 3.5).

Q(X)            α1    α2    α3    α′1   α′2 = α′3   αNβ1 = αNβ2 = αNβ3
{x1}            0     1/3   1     0     1           1
{x2}            1/4   1/3   1     1     1           1
{x3}            0     0     0     1     1           1
{x4}            0     0     0     1     1           1
{x1, x2}        1/2   1/2   1     1     1           1
{x1, x3}        0     1/4   1/3   1/2   1           1
{x1, x4}        1/2   1/2   1/3   1     1           1
{x2, x3}        1/2   1/2   1/3   1     1           1
{x2, x4}        1/4   1/4   1/3   2/3   1           1
{x3, x4}        1/2   1/2   1     1     1           1
{x1, x2, x3}    3/4   3/4   1/2   1     1           1
{x1, x2, x4}    3/4   3/4   1/2   1     1           1
{x1, x3, x4}    3/4   3/4   1     1     1           1
{x2, x3, x4}    3/4   3/4   1     3/4   1           1


5 Real life application

Suppose that B = {a, s, r} in Example 4.1 represents measurements for a kind of disease and that the objects X = {x1, x2, x3, x4} are patients. Then, for any group of patients, we can determine the degree of this disease by using the lower near coverage based on the decision class D, as in the following example.

Example 5.1

In Example 4.1, if the decision class is D = {x1, x3} and we consider the following groups of patients: {x1, x3}, {x2, x3}, {x3, x4}, {x1, x2, x3}, and {x2, x3, x4}.

Then, we get the following results:

$N_1(B)_* D = \emptyset$, $N_2(B)_* D = N_3(B)_* D = \{x_1\}$, $\underline{N}_1(D) = \{x_3\}$, $\underline{N}_2(D) = \underline{N}_3(D) = \underline{N}_{\beta 1}(D) = \underline{N}_{\beta 2}(D) = \underline{N}_{\beta 3}(D) = \{x_1, x_3\}$. So these sets cover the acceptable objects to the degrees given in Table 3, where $I = \nu_{N_2} = \nu_{N_3} = \nu_{N_{\beta 1}} = \nu_{N_{\beta 2}} = \nu_{N_{\beta 3}}$ and Q(X) is a family of subsets of X.

Remark 5.1

If we want to determine the degree to which the lower approximations cover these sets, we use the following formulas: $\nu_i^*(B_i(x), N_r(B)_* D) = \frac{|B_i(x) \cap N_r(B)_* D|}{|B_i(x)|}$, $B_i(x) \neq \emptyset$; $\nu_{N_i}^*(Y, \underline{N}_i(D)) = \frac{|Y \cap \underline{N}_i(D)|}{|Y|}$, $Y \neq \emptyset$; $\nu_{\beta N_i}^*(Y, \underline{N}_{\beta i}(D)) = \frac{|Y \cap \underline{N}_{\beta i}(D)|}{|Y|}$, $Y \neq \emptyset$.
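The $\nu^*(Y, \cdot) = |Y \cap \cdot|/|Y|$ coverage formulas above can be checked against Example 5.1, where the relevant lower approximation of the decision class D is {x1, x3}; exact fractions are used so the results match Table 4:

```python
# Coverage of a patient group Y by a lower approximation (Remark 5.1).
from fractions import Fraction

def coverage_of_group(Y, lower):
    """nu*(Y, lower) = |Y ∩ lower| / |Y| for nonempty Y."""
    return Fraction(len(Y & lower), len(Y))

lower_D = {'x1', 'x3'}
deg = coverage_of_group({'x1', 'x2', 'x3'}, lower_D)   # 2/3, as in Table 4
```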

Example 5.2

In Example 5.1, if we are interested in the degree to which the acceptable objects (the sure region) cover these groups, then we get Table 4, where $II = \nu_{N_2}^* = \nu_{\beta N_1}^* = \nu_{\beta N_2}^* = \nu_{\beta N_3}^*$ and Q(X) is a family of subsets of X.

Table 3 The degrees to which some subsets of X cover the sure region.

Q(X)            ν1   ν2   ν3   νN1   I
{x1, x3}        1    1    1    1     1
{x2, x3}        1    0    0    1     1/2
{x3, x4}        1    0    0    1     1/2
{x1, x2, x3}    1    1    1    1     1
{x2, x3, x4}    1    0    0    1     1/2
Table 4 The degrees to which the sure region covers some subsets of X.

Q(X)            ν1*   ν2*   ν3*   νN1*   II
{x1, x3}        0     1/2   1/2   1/2    1
{x2, x3}        0     0     0     1/2    1/2
{x3, x4}        0     0     0     1/2    1/2
{x1, x2, x3}    0     1/3   1/3   1/3    2/3
{x2, x3, x4}    0     0     0     1/3    1/3

From Table 4 we can say that our two generalized approaches are better than the classical approach to near set theory, as these approximations enlarge the sure region of acceptable objects.


6 Conclusion

This research aims to improve the lower and upper approximations of any near set by using a general topology and β-open sets; consequently, we introduced modifications of some concepts.

By using these new approximations, the boundary region of any near set is decreased. This research can therefore be considered a starting point for many real-life applications.

References

  1. Abd El-Monsef, M.E., El-Deeb, S.N., Mahmoud, R.A., 1983. β-Open sets and β-continuous mappings. Bull. Fac. Sci. Assiut Univ. 12(1), 77–90.
  2. Pawlak, Z., 1981. Rough sets. Int. J. Comput. Inform. Sci. 11, 341–356.
  3. Pawlak, Z., 2004. Some issues on rough sets. Trans. Rough Sets 1, 1–58.
  4. Pawlak, Z., Skowron, A., 1994. Rough membership functions. In: Yager, R.R., Fedrizzi, M., Kacprzyk, J. (Eds.), Advances in the Dempster–Shafer Theory of Evidence. John Wiley & Sons, New York, pp. 251–271.
  5. Pawlak, Z., Skowron, A., 2007. Rudiments of rough sets. Inform. Sci. 177, 3–27.
  6. Pawlak, Z., Skowron, A., 2007. Rough sets: some extensions. Inform. Sci. 177, 28–40.
  7. Pawlak, Z., Skowron, A., 2007. Rough sets and Boolean reasoning. Inform. Sci. 177, 41–73.
  8. Peters, J.F., 2007. Near sets. General theory about nearness of objects. Appl. Math. Sci. 1(53–56), 2609–2629.
  9. Peters, J.F., 2007. Near sets. Special theory about nearness of objects. Fundam. Inform. 75(1–4), 407–433.
  10. Peters, J.F., 2008. Classification of perceptual objects by means of features. Int. J. Info. Technol. Intell. Comput. 3(2), 1–35.
  11. Peters, J.F., 2008. Tolerance near sets and image correspondence. Int. J. Bio-Inspired Comput. 1(4), 239–245.
  12. Peters, J.F., Henry, C., 2009. Reinforcement learning with approximation spaces. Fundam. Inform. 71(2–3), 323–349.
  13. Peters, J., Ramanna, S., 2007. Feature selection: a near set approach. In: ECML & PKDD Workshop on Mining Complex Data, Warsaw, pp. 1–12.
  14. Peters, J.F., Skowron, A., Stepaniuk, J., 2007. Nearness of objects: extension of approximation space model. Fundam. Inform. 79(3–4), 497–512.