Journal of King Saud University – Science 31(4); 1137-1150. doi: 10.1016/j.jksus.2019.04.003

Bayesian estimation of the mixture of Burr Type-XII distributions using doubly censored data

Department of Statistics, Government College University, Faisalabad 38000, Pakistan
Department of Mathematics and Statistics, Riphah International University, Islamabad 44000, Pakistan
Department of Statistics, Quaid-i-Azam University, Islamabad 44000, Pakistan

⁎Corresponding author. tahirqaustat@yahoo.com (M. Tahir)

Disclaimer:
This article was originally published by Elsevier and was migrated to Scientific Scholar after the change of Publisher.

Peer review under responsibility of King Saud University.

Abstract

This study discusses the Bayesian and maximum likelihood estimation methods for analyzing data from a 3-component mixture of Burr Type-XII probability distributions. The maximum likelihood estimators and their variances cannot be obtained in explicit form, so an iterative procedure is used to compute them numerically. In contrast, elegant closed-form algebraic expressions for the Bayes estimators and their posterior risks are derived. Using informative and noninformative priors, the posterior predictive distributions and the corresponding predictive intervals are also discussed. A method for eliciting the hyperparameters via the prior predictive distribution is also part of this study. The behavior of the Bayes estimators, their posterior risks and the Bayesian predictive intervals across different sample sizes, left and right test termination times, and the informative prior (IP) versus the Jeffreys noninformative prior, is examined through a detailed Monte Carlo simulation study. To assess the suitability and applicability of the proposed model, a real-life data example is also discussed. Based on the simulated results and the real data application, it is concluded that the IP paired with the DeGroot loss function (DLF) is more suitable for estimating the component parameters, while the IP paired with the squared error loss function (SELF) is more suitable for estimating the mixing proportions.

Keywords

Doubly censored sampling scheme
Informative and non-informative priors
Posterior distribution
Predictive interval
Bayes estimator
Posterior risk
1 Introduction

Censoring is an important aspect of reliability studies. There are various types of censoring schemes, for example, doubly or interval censoring and right and left censored sampling schemes; each can be used in practice depending on the problem under investigation. For example, zoologists often perform experiments on animals (rats or rabbits) to study the effect of a certain stimulus (a drug) on them. A fixed number of rats or rabbits receive the drug, and their behavior or survival time is recorded. Some animals may take a very long time to respond. Instead of waiting to record the survival times of all experimental units, the experimenter may decide to stop recording once a fixed number of animals have reacted to the stimulus. The random sample of reaction times obtained by this early termination of the experiment is known as right censored data. Under a left censoring sampling scheme, the analyst only knows that the survival time is smaller than a fixed censoring time. Another very useful scheme combines the left and the right censoring schemes and is commonly known as the doubly censored sampling scheme. For some recent studies on doubly censored sampling for simple and mixture distributions, we refer to Khan et al. (2010), Kim and Song (2010), Pak et al. (2013), Feroze and Aslam (2014a,b), Sindhu et al. (2015) and the references cited therein.

We consider the Burr Type-XII distribution because of its flexibility and its closed-form cumulative distribution and hazard functions. It is also known as the Singh-Maddala distribution and the generalized log-logistic distribution, and it is one of the most commonly used distributions for modeling household income. Its appeal here lies in its flexibility for model fitting: it combines a simple mathematical expression for the cumulative frequency function with wide coverage in the skewness-kurtosis plane. Many well-known distributions, including the Weibull, exponential, logistic, generalized logistic, Gompertz, normal, extreme value, and uniform distributions, are special or limiting cases of the Burr system of distributions; see Lewis (1981) for details. Burr (1942) proposed a family of cumulative distribution functions that are very flexible for modeling different data and are used in many practical fields such as household income, crop prices, insurance risk, travel time, flood levels, and failure data. Among the Burr family, the Burr Type-X and Type-XII distributions have received great attention from analysts; see, for example, Johnson et al. (1994), Burr (1968), Burr and Cislak (1968), Rodriguez (1977), Tadikamalla (1980), Economou and Caroni (2005) and Soliman (2005). Saleem (2010) discussed the Bayesian estimation of the parameters of a two-component mixture of Burr distributions under different priors. Panahi and Asadi (2011) addressed parameter estimation for the Burr distribution using hybrid (Type-II) censored data. Feroze and Aslam (2014a,b) presented Bayesian analyses of a two-component mixture of Burr Type-V distributions under different censoring schemes. Tripathi et al. (2018) estimated a linear parametric function of a doubly censored exponential distribution. Feroze and Aslam (2018) presented an approximate Bayesian analysis of a two-component mixture of Weibull distributions under a doubly censored sampling scheme.

In practice, it is challenging to find homogeneous data, and mixture distributions therefore have natural applications owing to their more flexible structure compared with traditional distributions. Motivated by this fact, this article discusses the Bayesian estimation of the 3-component mixture of Burr Type-XII distributions (3-CMBD) using doubly censored data. The probability density function (pdf) of a finite 3-CMBD with mixing proportions $p_1$ and $p_2$ is given as:

(1)
$$f(y;\varphi)=p_1\varphi_1(1+y)^{-(\varphi_1+1)}+p_2\varphi_2(1+y)^{-(\varphi_2+1)}+(1-p_1-p_2)\varphi_3(1+y)^{-(\varphi_3+1)},$$
where $0<y<\infty$, $\varphi_m>0$, $p_m\ge 0$, $\sum_{m=1}^{2}p_m\le 1$, $\varphi=(\varphi_1,\varphi_2,\varphi_3,p_1,p_2)$ and $m=1,2,3$.

The cumulative distribution function (cdf) of a finite 3-CMBD with mixing proportions $p_1$ and $p_2$ is:
$$F(y;\varphi)=1-p_1(1+y)^{-\varphi_1}-p_2(1+y)^{-\varphi_2}-(1-p_1-p_2)(1+y)^{-\varphi_3}.$$
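Because each component cdf inverts in closed form, sampling from the 3-CMBD is straightforward. The following is a minimal sketch, assuming the pdf and cdf above; the helper names and demo parameter values are our own, not from the paper.

```python
import numpy as np

def mix_pdf(y, phi, p1, p2):
    """f(y; phi) = sum_m p_m * phi_m * (1 + y)^-(phi_m + 1), Eq. (1)."""
    p = [p1, p2, 1.0 - p1 - p2]
    return sum(pm * fm * (1.0 + y) ** -(fm + 1.0) for pm, fm in zip(p, phi))

def mix_cdf(y, phi, p1, p2):
    """F(y; phi) = 1 - sum_m p_m * (1 + y)^-phi_m."""
    p = [p1, p2, 1.0 - p1 - p2]
    return 1.0 - sum(pm * (1.0 + y) ** -fm for pm, fm in zip(p, phi))

def sample_mixture(n, phi, p1, p2, rng=np.random.default_rng(1)):
    """Pick a component, then invert its cdf: y = (1 - u)^(-1/phi_m) - 1."""
    comps = rng.choice(3, size=n, p=[p1, p2, 1.0 - p1 - p2])
    u = rng.uniform(size=n)
    return (1.0 - u) ** (-1.0 / np.asarray(phi)[comps]) - 1.0

# quick self-check: empirical cdf should match the analytic cdf
y = sample_mixture(10_000, phi=[5.0, 4.0, 3.0], p1=0.4, p2=0.3)
print(mix_cdf(1.0, [5.0, 4.0, 3.0], 0.4, 0.3), np.mean(y <= 1.0))
```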

The rest of the article is arranged as follows. Section 2 introduces the 3-component mixture of Burr Type-XII distributions (3-CMBD) under doubly censored data, and derives the ML estimators with their variances and the joint posterior distributions under the noninformative and informative priors. The same section also covers Bayesian estimation under different symmetric and asymmetric loss functions, the posterior predictive distribution with the Bayesian predictive intervals, and a method for eliciting the hyperparameters. Section 3 discusses the simulated and real data results. Finally, some concluding remarks are given in Section 4.

2 Material and methods

2.1 Doubly censored sampling scheme

Assume that n units from the 3-component mixture of Burr Type-XII distributions defined in Eq. (1) are put on test. Let $y_r, y_{r+1}, \ldots, y_w$ be the ordered values that can be observed, the $r-1$ smallest values and the $n-w$ largest values being censored. Further, assume $y_{1r_1},\ldots,y_{1w_1}$, $y_{2r_2},\ldots,y_{2w_2}$ and $y_{3r_3},\ldots,y_{3w_3}$ are the failure values belonging to subpopulation-I, subpopulation-II and subpopulation-III, respectively. The remaining values, greater than $y_w$ or smaller than $y_r$, are taken to be censored, where $y_w=\max(y_{1w_1},y_{2w_2},y_{3w_3})$ and $y_r=\min(y_{1r_1},y_{2r_2},y_{3r_3})$. Thus $s_1=w_1-r_1+1$, $s_2=w_2-r_2+1$ and $s_3=w_3-r_3+1$ failures are observed from subpopulation-I, subpopulation-II and subpopulation-III, respectively, and the remaining $n-w-r+3$ values are censored, where $r=r_1+r_2+r_3$, $w=w_1+w_2+w_3$ and $s=s_1+s_2+s_3$. The likelihood function of the doubly censored sample $y=(y_1=(y_{1r_1},\ldots,y_{1w_1}),\, y_2=(y_{2r_2},\ldots,y_{2w_2}),\, y_3=(y_{3r_3},\ldots,y_{3w_3}))$ can be written as:
$$L(\varphi|y)\propto \prod_{i=r_1}^{w_1}p_1 f_1(y_{1i})\,\prod_{i=r_2}^{w_2}p_2 f_2(y_{2i})\,\prod_{i=r_3}^{w_3}(1-p_1-p_2) f_3(y_{3i})\times\{F_1(y_{1r_1})\}^{r_1-1}\{F_2(y_{2r_2})\}^{r_2-1}\{F_3(y_{3r_3})\}^{r_3-1}\{1-F(y_w)\}^{n-w}.$$

(2)
$$L(\varphi|y)\propto\sum_{u_1=0}^{r_1-1}\sum_{u_2=0}^{r_2-1}\sum_{u_3=0}^{r_3-1}\sum_{u_4=0}^{n-w}\sum_{u_5=0}^{u_4}\left\{\prod_{k=1}^{3}(-1)^{u_k}\binom{r_k-1}{u_k}\right\}\binom{n-w}{u_4}\binom{u_4}{u_5}\,\varphi_1^{G_1}\varphi_2^{G_2}\varphi_3^{G_3}\,e^{-H_1\varphi_1}e^{-H_2\varphi_2}e^{-H_3\varphi_3}\,p_1^{J_1}p_2^{J_2}(1-p_1-p_2)^{J_3},$$
where
$$G_1=s_1,\quad J_1=s_1+n-w-u_4,\quad H_1=\sum_{i=r_1}^{w_1}\ln(1+y_{1i})+u_1\ln(1+y_{1r_1})+(n-w-u_4)\ln(1+y_w),$$
$$G_2=s_2,\quad J_2=s_2+u_4-u_5,\quad H_2=\sum_{i=r_2}^{w_2}\ln(1+y_{2i})+u_2\ln(1+y_{2r_2})+(u_4-u_5)\ln(1+y_w),$$
$$G_3=s_3,\quad J_3=s_3+u_5,\quad H_3=\sum_{i=r_3}^{w_3}\ln(1+y_{3i})+u_3\ln(1+y_{3r_3})+u_5\ln(1+y_w).$$
Throughout the sequel we abbreviate the quintuple sum together with its binomial coefficients by the operator
$$\Lambda[\,\cdot\,]\equiv\sum_{u_1=0}^{r_1-1}\sum_{u_2=0}^{r_2-1}\sum_{u_3=0}^{r_3-1}\sum_{u_4=0}^{n-w}\sum_{u_5=0}^{u_4}\left\{\prod_{k=1}^{3}(-1)^{u_k}\binom{r_k-1}{u_k}\right\}\binom{n-w}{u_4}\binom{u_4}{u_5}\,[\,\cdot\,].$$

2.2 Maximum likelihood estimators and their variances

The maximum likelihood (ML) estimators of the unknown parameters $\varphi=(\varphi_1,\varphi_2,\varphi_3,p_1,p_2)$ of the 3-CMBD are obtained by solving the non-linear Eqs. (3)–(7) simultaneously. These equations result from partially differentiating the log-likelihood function $l$ with respect to $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$, respectively. For brevity, let $D\equiv p_1(1+y_w)^{-\varphi_1}+p_2(1+y_w)^{-\varphi_2}+(1-p_1-p_2)(1+y_w)^{-\varphi_3}$ denote the mixture survivor function $1-F(y_w)$ at the largest observed value.

(3)
$$\frac{\partial l}{\partial\varphi_1}=\frac{s_1}{\varphi_1}-\sum_{i=r_1}^{w_1}\ln(1+y_{1i})-\frac{(n-w)\ln(1+y_w)\,p_1(1+y_w)^{-\varphi_1}}{D}+\frac{(r_1-1)\ln(1+y_{1r_1})\,(1+y_{1r_1})^{-\varphi_1}}{1-(1+y_{1r_1})^{-\varphi_1}}=0$$
(4)
$$\frac{\partial l}{\partial\varphi_2}=\frac{s_2}{\varphi_2}-\sum_{i=r_2}^{w_2}\ln(1+y_{2i})-\frac{(n-w)\ln(1+y_w)\,p_2(1+y_w)^{-\varphi_2}}{D}+\frac{(r_2-1)\ln(1+y_{2r_2})\,(1+y_{2r_2})^{-\varphi_2}}{1-(1+y_{2r_2})^{-\varphi_2}}=0$$
(5)
$$\frac{\partial l}{\partial\varphi_3}=\frac{s_3}{\varphi_3}-\sum_{i=r_3}^{w_3}\ln(1+y_{3i})-\frac{(n-w)\ln(1+y_w)\,(1-p_1-p_2)(1+y_w)^{-\varphi_3}}{D}+\frac{(r_3-1)\ln(1+y_{3r_3})\,(1+y_{3r_3})^{-\varphi_3}}{1-(1+y_{3r_3})^{-\varphi_3}}=0$$
(6)
$$\frac{\partial l}{\partial p_1}=\frac{s_1}{p_1}-\frac{s_3}{1-p_1-p_2}+\frac{(n-w)\{(1+y_w)^{-\varphi_1}-(1+y_w)^{-\varphi_3}\}}{D}=0$$
(7)
$$\frac{\partial l}{\partial p_2}=\frac{s_2}{p_2}-\frac{s_3}{1-p_1-p_2}+\frac{(n-w)\{(1+y_w)^{-\varphi_2}-(1+y_w)^{-\varphi_3}\}}{D}=0$$

Using the Mathematica software, the ML estimates of $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ are obtained by solving Eqs. (3)–(7) with an iterative procedure. The variances of the ML estimators are the diagonal elements of the inverted Fisher information matrix, since $\hat\varphi\sim N(\varphi,I^{-1}(\varphi))$ asymptotically. The Fisher information matrix is the expected negative Hessian of $l$ over the parameter vector $(\varphi_1,\varphi_2,\varphi_3,p_1,p_2)$,
$$I(\varphi)=-E\left[\frac{\partial^2 l}{\partial\varphi_i\,\partial\varphi_j}\right]_{5\times 5},$$
whose distinct diagonal entries are given in Eqs. (8)–(12) below; a numerical sketch of this fitting step follows Eq. (12).

(8)
$$\frac{\partial^2 l}{\partial\varphi_1^2}=-\frac{s_1}{\varphi_1^2}+\frac{(n-w)\{\ln(1+y_w)\}^2\,p_1(1+y_w)^{-\varphi_1}\{p_2(1+y_w)^{-\varphi_2}+(1-p_1-p_2)(1+y_w)^{-\varphi_3}\}}{D^2}-\frac{(r_1-1)\{\ln(1+y_{1r_1})\}^2(1+y_{1r_1})^{-\varphi_1}}{\{1-(1+y_{1r_1})^{-\varphi_1}\}^2}$$
(9)
$$\frac{\partial^2 l}{\partial\varphi_2^2}=-\frac{s_2}{\varphi_2^2}+\frac{(n-w)\{\ln(1+y_w)\}^2\,p_2(1+y_w)^{-\varphi_2}\{p_1(1+y_w)^{-\varphi_1}+(1-p_1-p_2)(1+y_w)^{-\varphi_3}\}}{D^2}-\frac{(r_2-1)\{\ln(1+y_{2r_2})\}^2(1+y_{2r_2})^{-\varphi_2}}{\{1-(1+y_{2r_2})^{-\varphi_2}\}^2}$$
(10)
$$\frac{\partial^2 l}{\partial\varphi_3^2}=-\frac{s_3}{\varphi_3^2}+\frac{(n-w)\{\ln(1+y_w)\}^2\,(1-p_1-p_2)(1+y_w)^{-\varphi_3}\{p_1(1+y_w)^{-\varphi_1}+p_2(1+y_w)^{-\varphi_2}\}}{D^2}-\frac{(r_3-1)\{\ln(1+y_{3r_3})\}^2(1+y_{3r_3})^{-\varphi_3}}{\{1-(1+y_{3r_3})^{-\varphi_3}\}^2}$$
(11)
$$\frac{\partial^2 l}{\partial p_1^2}=-\frac{s_1}{p_1^2}-\frac{s_3}{(1-p_1-p_2)^2}-\frac{(n-w)\{(1+y_w)^{-\varphi_1}-(1+y_w)^{-\varphi_3}\}^2}{D^2}$$
(12)
$$\frac{\partial^2 l}{\partial p_2^2}=-\frac{s_2}{p_2^2}-\frac{s_3}{(1-p_1-p_2)^2}-\frac{(n-w)\{(1+y_w)^{-\varphi_2}-(1+y_w)^{-\varphi_3}\}^2}{D^2}$$

2.3 Posterior distributions using noninformative and informative priors

A noninformative prior (NIP) is used when little or no prior knowledge about the parameter of interest is available; the most common NIP is the Jeffreys prior (JP). When definite knowledge about the parameter is available, it is incorporated through an informative prior (IP). In this study, we use both the Jeffreys NIP and a gamma IP. Why the gamma distribution as an IP for this mixture? Since the Burr Type-XII distribution is skewed, a skewed prior may express expert knowledge more faithfully. Moreover, the gamma distribution is a conjugate prior for the Burr Type-XII component parameters, so the computations are much easier than with a non-conjugate prior. Assuming doubly censored data, the joint posterior distributions of the parameters are given in the following subsections.

2.3.1 Posterior distribution using the Jeffreys prior

The JP for an unknown component parameter $\varphi_m$ is obtained from Fisher's information $I(\varphi_m)$ by the rule $p(\varphi_m)\propto\sqrt{I(\varphi_m)}$, where $I(\varphi_m)=-E\left[\partial^2\ln L(\varphi_m;y_m)/\partial\varphi_m^2\right]$, $m=1,2,3$. A uniform prior (UP) is taken for each unknown proportion parameter, i.e., $p_m\sim\mathrm{Uniform}(0,1)$. Assuming independence of the parameters $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$, the joint prior distribution is given in Eq. (13).

(13)
$$\pi_1(\varphi)\propto\frac{1}{\varphi_1\varphi_2\varphi_3},\qquad \varphi_m>0,\; p_m\ge 0,\; \sum_{m=1}^{2}p_m\le 1,\; m=1,2,3.$$

Using the JP (Eq. (13)) and the likelihood function (Eq. (2)), the joint posterior distribution of $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ given the data y is
$$g_1(\varphi|y)=\frac{L(\varphi|y)\,\pi_1(\varphi)}{\int_\varphi L(\varphi|y)\,\pi_1(\varphi)\,d\varphi}=\frac{1}{\omega_1}\Lambda\!\left[\varphi_1^{G_1-1}\varphi_2^{G_2-1}\varphi_3^{G_3-1}\,e^{-H_1\varphi_1}e^{-H_2\varphi_2}e^{-H_3\varphi_3}\,p_1^{J_1}p_2^{J_2}(1-p_1-p_2)^{J_3}\right],$$
with normalizing constant
$$\omega_1=\Lambda\!\left[\Gamma(G_1)\Gamma(G_2)\Gamma(G_3)\,H_1^{-G_1}H_2^{-G_2}H_3^{-G_3}\,B(J_1+1,J_2+1,J_3+1)\right],$$
where $B(\cdot,\cdot,\cdot)$ denotes the bivariate beta (Dirichlet) normalizing constant.

The marginal posterior distributions of $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ under the JP are obtained by integrating out the nuisance parameters:
$$h_1(\varphi_\tau|y)=\int_{p_2}\int_{p_1}\int_{\varphi_\eta}\int_{\varphi_\pi}g_1(\varphi|y)\,d\varphi_\pi\,d\varphi_\eta\,dp_1\,dp_2=\frac{1}{\omega_1}\Lambda\!\left[\Gamma(G_\pi)\Gamma(G_\eta)\,H_\pi^{-G_\pi}H_\eta^{-G_\eta}\,B(J_1+1,J_3+1)\,B(J_2+1,J_1+J_3+2)\,\varphi_\tau^{G_\tau-1}e^{-H_\tau\varphi_\tau}\right],\quad\varphi_\tau>0,$$
where the triple $(\tau,\pi,\eta)$ takes the values (i) $(1,2,3)$, (ii) $(2,1,3)$ and (iii) $(3,1,2)$, and $B(\cdot,\cdot)$ is the usual beta function. Similarly,
$$h_1(p_\xi|y)=\int_{p_\varepsilon}\int_{\varphi_3}\int_{\varphi_2}\int_{\varphi_1}g_1(\varphi|y)\,d\varphi_1\,d\varphi_2\,d\varphi_3\,dp_\varepsilon=\frac{1}{\omega_1}\Lambda\!\left[\Gamma(G_1)\Gamma(G_2)\Gamma(G_3)\,H_1^{-G_1}H_2^{-G_2}H_3^{-G_3}\,B(\upsilon_1+1,J_3+1)\,p_\xi^{(\Delta_1+1)-1}(1-p_\xi)^{(\upsilon_1+J_3+2)-1}\right],\quad 0<p_\xi<1,$$
where $(\xi,\varepsilon,\upsilon_1,\Delta_1)$ takes the values (i) $\xi=1$, $\varepsilon=2$, $\upsilon_1=J_2$, $\Delta_1=J_1$ and (ii) $\xi=2$, $\varepsilon=1$, $\upsilon_1=J_1$, $\Delta_1=J_2$.

2.3.2 Posterior distribution using the informative prior

Here, a gamma distribution with parameters $a_m$ and $b_m$ is assumed as the IP for each unknown component parameter, i.e., $\varphi_m\sim\mathrm{Gamma}(a_m,b_m)$, $m=1,2,3$. Also, we assume $(p_1,p_2)\sim\mathrm{BivariateBeta}(c_1,c_2,c_3)$ as the IP for the unknown proportion parameters $p_1$ and $p_2$. The joint prior distribution, assuming independence among $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$, can be written as:

(14)
$$\pi_2(\varphi)=\frac{b_1^{a_1}b_2^{a_2}b_3^{a_3}\,\varphi_1^{a_1-1}\varphi_2^{a_2-1}\varphi_3^{a_3-1}\exp\{-(b_1\varphi_1+b_2\varphi_2+b_3\varphi_3)\}\,p_1^{c_1-1}p_2^{c_2-1}(1-p_1-p_2)^{c_3-1}}{\Gamma(a_1)\Gamma(a_2)\Gamma(a_3)\,B(c_1,c_2,c_3)}.$$

Using Eq. (14) and the likelihood function (Eq. (2)), the joint posterior distribution of $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ given the data y follows from Bayes' theorem:
$$g_2(\varphi|y)=\frac{L(\varphi|y)\,\pi_2(\varphi)}{\int_\varphi L(\varphi|y)\,\pi_2(\varphi)\,d\varphi}=\frac{1}{\omega_2}\Lambda\!\left[\varphi_1^{G_1+a_1-1}\varphi_2^{G_2+a_2-1}\varphi_3^{G_3+a_3-1}\,e^{-(H_1+b_1)\varphi_1}e^{-(H_2+b_2)\varphi_2}e^{-(H_3+b_3)\varphi_3}\,p_1^{J_1+c_1-1}p_2^{J_2+c_2-1}(1-p_1-p_2)^{J_3+c_3-1}\right],$$
where
$$\omega_2=\Lambda\!\left[\Gamma(G_1+a_1)\Gamma(G_2+a_2)\Gamma(G_3+a_3)\,(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1,J_2+c_2,J_3+c_3)\right].$$

The marginal posterior distributions of $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ under the IP are, analogously,
$$h_2(\varphi_\tau|y)=\frac{1}{\omega_2}\Lambda\!\left[\Gamma(G_\pi+a_\pi)\Gamma(G_\eta+a_\eta)\,(H_\pi+b_\pi)^{-(G_\pi+a_\pi)}(H_\eta+b_\eta)^{-(G_\eta+a_\eta)}\,B(J_1+c_1,J_3+c_3)\,B(J_2+c_2,J_1+J_3+c_1+c_3)\,\varphi_\tau^{G_\tau+a_\tau-1}e^{-(H_\tau+b_\tau)\varphi_\tau}\right],\quad\varphi_\tau>0,$$
$$h_2(p_\xi|y)=\frac{1}{\omega_2}\Lambda\!\left[\Gamma(G_1+a_1)\Gamma(G_2+a_2)\Gamma(G_3+a_3)\,(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(\upsilon_1+c_\varepsilon,J_3+c_3)\,p_\xi^{(\Delta_1+c_\xi)-1}(1-p_\xi)^{(\upsilon_1+J_3+c_\varepsilon+c_3)-1}\right],\quad 0<p_\xi<1,$$
with $(\tau,\pi,\eta)$ and $(\xi,\varepsilon,\upsilon_1,\Delta_1)$ as defined in Section 2.3.1.

2.4 Bayesian estimation under symmetric and asymmetric loss functions

The choice of a suitable loss function is a significant problem in decision theory as well as in Bayesian inference. An analyst faces two kinds of error: under-estimation and over-estimation. If under-estimation and over-estimation are equally serious, symmetric loss functions are the natural choice; if one is more serious than the other, asymmetric loss functions are preferred. Thus, in this study we use both symmetric and asymmetric loss functions to study their appropriateness for the model. The most frequently used symmetric loss function is the squared error loss function (SELF), which penalizes under- and over-estimation equally. As asymmetric alternatives, the precautionary loss function (PLF) and the DeGroot loss function (DLF) are commonly used.

The Bayes estimator $\hat\delta$ is obtained by minimizing the posterior risk $\rho(\hat\delta)=E_{\varphi|y}\,L(\varphi,\hat\delta)$, where $L(\varphi,\hat\delta)$ is the loss incurred in estimating $\varphi$ by $\hat\delta$. For a given prior, the general forms of the Bayes estimators and their posterior risks are given in Table 1; a small numerical check follows the table.

Table 1 Bayes estimators and posterior risks under SELF, PLF and DLF.
| Loss function | Bayes estimator | Posterior risk |
| SELF: $L(\varphi,\hat\delta)=(\varphi-\hat\delta)^2$ | $\hat\delta=E(\varphi\mid y)$ | $\rho(\hat\delta)=E(\varphi^2\mid y)-\{E(\varphi\mid y)\}^2$ |
| PLF: $L(\varphi,\hat\delta)=(\varphi-\hat\delta)^2/\hat\delta$ | $\hat\delta=\sqrt{E(\varphi^2\mid y)}$ | $\rho(\hat\delta)=2\sqrt{E(\varphi^2\mid y)}-2E(\varphi\mid y)$ |
| DLF: $L(\varphi,\hat\delta)=\{(\varphi-\hat\delta)/\hat\delta\}^2$ | $\hat\delta=E(\varphi^2\mid y)/E(\varphi\mid y)$ | $\rho(\hat\delta)=1-\{E(\varphi\mid y)\}^2/E(\varphi^2\mid y)$ |
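Every entry of Table 1 depends only on the first two posterior moments. The following minimal sketch, assuming draws from some posterior (here an arbitrary gamma stand-in of our own choosing), computes all three estimator/risk pairs:

```python
import numpy as np

def bayes_estimates(phi):
    """Table 1 estimators and risks from posterior draws `phi`."""
    m1, m2 = phi.mean(), np.mean(phi ** 2)
    return {"SELF": (m1, m2 - m1 ** 2),
            "PLF":  (np.sqrt(m2), 2 * (np.sqrt(m2) - m1)),
            "DLF":  (m2 / m1, 1.0 - m1 ** 2 / m2)}

draws = np.random.default_rng(7).gamma(shape=19.0, scale=1 / 30.5, size=100_000)
print(bayes_estimates(draws))
```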

Next, we present the closed-form expressions of the Bayes estimators and their posterior risks under the NIP (JP) and the IP for the loss functions above.

2.4.1 Bayes estimators and posterior risks under SELF

Using the posterior distributions derived under the JP and the IP, the Bayes estimators and posterior risks under SELF are given below; a numerical-evaluation sketch follows Eq. (22).

For compactness, define the posterior-moment sums
$$T_\varphi^{(\mathrm{JP})}(j)\equiv\Lambda\!\left[\Gamma(G_\tau+j)\Gamma(G_\pi)\Gamma(G_\eta)\,H_\tau^{-(G_\tau+j)}H_\pi^{-G_\pi}H_\eta^{-G_\eta}\,B(J_1+1,J_3+1)\,B(J_2+1,J_1+J_3+2)\right],$$
$$T_\varphi^{(\mathrm{IP})}(j)\equiv\Lambda\!\left[\Gamma(G_\tau+a_\tau+j)\Gamma(G_\pi+a_\pi)\Gamma(G_\eta+a_\eta)\,(H_\tau+b_\tau)^{-(G_\tau+a_\tau+j)}(H_\pi+b_\pi)^{-(G_\pi+a_\pi)}(H_\eta+b_\eta)^{-(G_\eta+a_\eta)}\,B(J_1+c_1,J_3+c_3)\,B(J_2+c_2,J_1+J_3+c_1+c_3)\right],$$
$$T_p^{(\mathrm{JP})}(j)\equiv\Lambda\!\left[\Gamma(G_1)\Gamma(G_2)\Gamma(G_3)\,H_1^{-G_1}H_2^{-G_2}H_3^{-G_3}\,B(\upsilon_1+1,J_3+1)\,B(\Delta_1+1+j,\upsilon_1+J_3+2)\right],$$
$$T_p^{(\mathrm{IP})}(j)\equiv\Lambda\!\left[\Gamma(G_1+a_1)\Gamma(G_2+a_2)\Gamma(G_3+a_3)\,(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(\upsilon_1+c_\varepsilon,J_3+c_3)\,B(\Delta_1+c_\xi+j,\upsilon_1+J_3+c_\varepsilon+c_3)\right],$$
so that $E(\varphi_\tau^j\mid y)=T_\varphi^{(\mathrm{JP})}(j)/\omega_1$ under the JP, $E(\varphi_\tau^j\mid y)=T_\varphi^{(\mathrm{IP})}(j)/\omega_2$ under the IP, and analogously for $p_\xi$. The Bayes estimators and posterior risks under SELF are then
(15) $\hat\varphi_\tau^{(\mathrm{JP})}=T_\varphi^{(\mathrm{JP})}(1)/\omega_1$
(16) $\hat\varphi_\tau^{(\mathrm{IP})}=T_\varphi^{(\mathrm{IP})}(1)/\omega_2$
(17) $\hat p_\xi^{(\mathrm{JP})}=T_p^{(\mathrm{JP})}(1)/\omega_1$
(18) $\hat p_\xi^{(\mathrm{IP})}=T_p^{(\mathrm{IP})}(1)/\omega_2$
(19) $\rho(\hat\varphi_\tau^{(\mathrm{JP})})=T_\varphi^{(\mathrm{JP})}(2)/\omega_1-\{\hat\varphi_\tau^{(\mathrm{JP})}\}^2$
(20) $\rho(\hat\varphi_\tau^{(\mathrm{IP})})=T_\varphi^{(\mathrm{IP})}(2)/\omega_2-\{\hat\varphi_\tau^{(\mathrm{IP})}\}^2$
(21) $\rho(\hat p_\xi^{(\mathrm{JP})})=T_p^{(\mathrm{JP})}(2)/\omega_1-\{\hat p_\xi^{(\mathrm{JP})}\}^2$
(22) $\rho(\hat p_\xi^{(\mathrm{IP})})=T_p^{(\mathrm{IP})}(2)/\omega_2-\{\hat p_\xi^{(\mathrm{IP})}\}^2$

2.4.2 Bayes estimators and posterior risks under PLF

The Bayes estimators and their posterior risks using the JP and the IP under PLF are given below:

(23) $\hat\varphi_\tau^{(\mathrm{JP})}=\{T_\varphi^{(\mathrm{JP})}(2)/\omega_1\}^{1/2}$
(24) $\hat\varphi_\tau^{(\mathrm{IP})}=\{T_\varphi^{(\mathrm{IP})}(2)/\omega_2\}^{1/2}$
(25) $\hat p_\xi^{(\mathrm{JP})}=\{T_p^{(\mathrm{JP})}(2)/\omega_1\}^{1/2}$
(26) $\hat p_\xi^{(\mathrm{IP})}=\{T_p^{(\mathrm{IP})}(2)/\omega_2\}^{1/2}$
(27) $\rho(\hat\varphi_\tau^{(\mathrm{JP})})=2\{T_\varphi^{(\mathrm{JP})}(2)/\omega_1\}^{1/2}-2\,T_\varphi^{(\mathrm{JP})}(1)/\omega_1$
(28) $\rho(\hat\varphi_\tau^{(\mathrm{IP})})=2\{T_\varphi^{(\mathrm{IP})}(2)/\omega_2\}^{1/2}-2\,T_\varphi^{(\mathrm{IP})}(1)/\omega_2$
(29) $\rho(\hat p_\xi^{(\mathrm{JP})})=2\{T_p^{(\mathrm{JP})}(2)/\omega_1\}^{1/2}-2\,T_p^{(\mathrm{JP})}(1)/\omega_1$
(30) $\rho(\hat p_\xi^{(\mathrm{IP})})=2\{T_p^{(\mathrm{IP})}(2)/\omega_2\}^{1/2}-2\,T_p^{(\mathrm{IP})}(1)/\omega_2$

2.4.3 Bayes estimators and posterior risks under DLF

The Bayes estimators and respective posterior risks assuming the JP and the IP under DLF are given as:

(31) $\hat\varphi_\tau^{(\mathrm{JP})}=T_\varphi^{(\mathrm{JP})}(2)/T_\varphi^{(\mathrm{JP})}(1)$
(32) $\hat\varphi_\tau^{(\mathrm{IP})}=T_\varphi^{(\mathrm{IP})}(2)/T_\varphi^{(\mathrm{IP})}(1)$
(33) $\hat p_\xi^{(\mathrm{JP})}=T_p^{(\mathrm{JP})}(2)/T_p^{(\mathrm{JP})}(1)$
(34) $\hat p_\xi^{(\mathrm{IP})}=T_p^{(\mathrm{IP})}(2)/T_p^{(\mathrm{IP})}(1)$
(35) $\rho(\hat\varphi_\tau^{(\mathrm{JP})})=1-\{T_\varphi^{(\mathrm{JP})}(1)\}^2/\{\omega_1\,T_\varphi^{(\mathrm{JP})}(2)\}$
(36) $\rho(\hat\varphi_\tau^{(\mathrm{IP})})=1-\{T_\varphi^{(\mathrm{IP})}(1)\}^2/\{\omega_2\,T_\varphi^{(\mathrm{IP})}(2)\}$
(37) $\rho(\hat p_\xi^{(\mathrm{JP})})=1-\{T_p^{(\mathrm{JP})}(1)\}^2/\{\omega_1\,T_p^{(\mathrm{JP})}(2)\}$
(38) $\rho(\hat p_\xi^{(\mathrm{IP})})=1-\{T_p^{(\mathrm{IP})}(1)\}^2/\{\omega_2\,T_p^{(\mathrm{IP})}(2)\}$

2.5 Posterior predictive distribution and Bayesian predictive intervals

An important aspect of Bayesian inference is the posterior predictive distribution of a future observation, which provides a flexible, probabilistic description of the data's future behavior. The posterior predictive distribution of the future observation $X=Y_{n+1}$ given the data y, along with the corresponding predictive intervals, is derived in this section.

2.5.1 Posterior predictive distribution

The posterior predictive distribution of X = Y n + 1 using the JP is obtained by:

(39)
$$f(x|y)=\int_\varphi f(x|\varphi)\,g_1(\varphi|y)\,d\varphi.$$

After some algebraic simplification, the posterior predictive distribution (39) becomes
$$f(x|y)=\frac{1}{\omega_1}\Lambda\!\left[\Gamma(G_1+1)\Gamma(G_2)\Gamma(G_3)\,(H_1+x)^{-(G_1+1)}H_2^{-G_2}H_3^{-G_3}\,B(J_1+2,J_3+1)\,B(J_2+1,J_1+J_3+3)\right]$$
$$+\frac{1}{\omega_1}\Lambda\!\left[\Gamma(G_1)\Gamma(G_2+1)\Gamma(G_3)\,H_1^{-G_1}(H_2+x)^{-(G_2+1)}H_3^{-G_3}\,B(J_1+1,J_3+1)\,B(J_2+2,J_1+J_3+2)\right]$$
$$+\frac{1}{\omega_1}\Lambda\!\left[\Gamma(G_1)\Gamma(G_2)\Gamma(G_3+1)\,H_1^{-G_1}H_2^{-G_2}(H_3+x)^{-(G_3+1)}\,B(J_1+1,J_3+2)\,B(J_2+1,J_1+J_3+3)\right].$$

The posterior predictive distribution of $X=Y_{n+1}$ using the IP follows in the same way from $f(x|y)=\int_\varphi f(x|\varphi)\,g_2(\varphi|y)\,d\varphi$ (with the IP-updated gamma and beta arguments carried through each factor):
$$f(x|y)=\frac{1}{\omega_2}\Lambda\!\left[\Gamma(G_1+a_1+1)\Gamma(G_2+a_2)\Gamma(G_3+a_3)\,(H_1+b_1+x)^{-(G_1+a_1+1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1+1,J_3+c_3)\,B(J_2+c_2,J_1+J_3+c_1+c_3+1)\right]$$
$$+\frac{1}{\omega_2}\Lambda\!\left[\Gamma(G_1+a_1)\Gamma(G_2+a_2+1)\Gamma(G_3+a_3)\,(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2+x)^{-(G_2+a_2+1)}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1,J_3+c_3)\,B(J_2+c_2+1,J_1+J_3+c_1+c_3)\right]$$
$$+\frac{1}{\omega_2}\Lambda\!\left[\Gamma(G_1+a_1)\Gamma(G_2+a_2)\Gamma(G_3+a_3+1)\,(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3+x)^{-(G_3+a_3+1)}\,B(J_1+c_1,J_3+c_3+1)\,B(J_2+c_2,J_1+J_3+c_1+c_3+1)\right].$$

2.5.2 Bayesian predictive interval

A $100(1-\alpha)\%$ Bayesian predictive interval (L, U) is obtained by solving
$$\int_0^L f(x|y)\,dx=\frac{\alpha}{2}=\int_U^\infty f(x|y)\,dx.$$

After some simplification (integrating each term of the predictive density above; the beta factors carry over unchanged), the Bayesian predictive limits (L, U) under the JP solve
$$\frac{1}{\omega_1}\Lambda\!\Big[\Gamma(G_1)\Gamma(G_2)\Gamma(G_3)\Big\{\{H_1^{-G_1}-(H_1+L)^{-G_1}\}H_2^{-G_2}H_3^{-G_3}\,B(J_1+2,J_3+1)B(J_2+1,J_1+J_3+3)+H_1^{-G_1}\{H_2^{-G_2}-(H_2+L)^{-G_2}\}H_3^{-G_3}\,B(J_1+1,J_3+1)B(J_2+2,J_1+J_3+2)+H_1^{-G_1}H_2^{-G_2}\{H_3^{-G_3}-(H_3+L)^{-G_3}\}\,B(J_1+1,J_3+2)B(J_2+1,J_1+J_3+3)\Big\}\Big]=\frac{\alpha}{2}$$
and
$$\frac{1}{\omega_1}\Lambda\!\Big[\Gamma(G_1)\Gamma(G_2)\Gamma(G_3)\Big\{(H_1+U)^{-G_1}H_2^{-G_2}H_3^{-G_3}\,B(J_1+2,J_3+1)B(J_2+1,J_1+J_3+3)+H_1^{-G_1}(H_2+U)^{-G_2}H_3^{-G_3}\,B(J_1+1,J_3+1)B(J_2+2,J_1+J_3+2)+H_1^{-G_1}H_2^{-G_2}(H_3+U)^{-G_3}\,B(J_1+1,J_3+2)B(J_2+1,J_1+J_3+3)\Big\}\Big]=\frac{\alpha}{2}.$$

Similarly, the Bayesian predictive limits (L, U) under the IP solve
$$\frac{1}{\omega_2}\Lambda\!\Big[\Gamma(G_1+a_1)\Gamma(G_2+a_2)\Gamma(G_3+a_3)\Big\{\{(H_1+b_1)^{-(G_1+a_1)}-(H_1+b_1+L)^{-(G_1+a_1)}\}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1+1,J_3+c_3)B(J_2+c_2,J_1+J_3+c_1+c_3+1)$$
$$+(H_1+b_1)^{-(G_1+a_1)}\{(H_2+b_2)^{-(G_2+a_2)}-(H_2+b_2+L)^{-(G_2+a_2)}\}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1,J_3+c_3)B(J_2+c_2+1,J_1+J_3+c_1+c_3)$$
$$+(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}\{(H_3+b_3)^{-(G_3+a_3)}-(H_3+b_3+L)^{-(G_3+a_3)}\}\,B(J_1+c_1,J_3+c_3+1)B(J_2+c_2,J_1+J_3+c_1+c_3+1)\Big\}\Big]=\frac{\alpha}{2}$$
and
$$\frac{1}{\omega_2}\Lambda\!\Big[\Gamma(G_1+a_1)\Gamma(G_2+a_2)\Gamma(G_3+a_3)\Big\{(H_1+b_1+U)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1+1,J_3+c_3)B(J_2+c_2,J_1+J_3+c_1+c_3+1)+(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2+U)^{-(G_2+a_2)}(H_3+b_3)^{-(G_3+a_3)}\,B(J_1+c_1,J_3+c_3)B(J_2+c_2+1,J_1+J_3+c_1+c_3)+(H_1+b_1)^{-(G_1+a_1)}(H_2+b_2)^{-(G_2+a_2)}(H_3+b_3+U)^{-(G_3+a_3)}\,B(J_1+c_1,J_3+c_3+1)B(J_2+c_2,J_1+J_3+c_1+c_3+1)\Big\}\Big]=\frac{\alpha}{2}.$$
A numerical sketch of the root-finding step is given below.
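In practice, (L, U) can be found with a root finder applied to the predictive cdf. This is a hedged sketch under our own assumptions: `pred_pdf` is a single Burr-type predictive term with illustrative values echoing $s_1$ and $H_1$ from the real data of Section 3.1 (the mixture case sums three such terms), and the helper names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def pred_pdf(x, G=19.0, H=30.5):
    # one representative predictive term: G * H^G / (H + x)^(G + 1)
    return G * H ** G / (H + x) ** (G + 1)

def predictive_interval(pdf, alpha=0.10, hi=1e6):
    """Solve int_0^L pdf = alpha/2 and int_0^U pdf = 1 - alpha/2."""
    cdf = lambda t: quad(pdf, 0.0, t)[0]
    L = brentq(lambda t: cdf(t) - alpha / 2, 1e-12, hi)
    U = brentq(lambda t: cdf(t) - (1 - alpha / 2), 1e-12, hi)
    return L, U

print(predictive_interval(pred_pdf))  # 90% predictive limits
```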

2.6 Elicitation of hyperparameters

According to Garthwaite et al. (2004), elicitation is the process of translating an expert's prior knowledge and professional judgment about unknown quantities of interest into hyperparameter values. Aslam (2003) discussed an elicitation method based on prior predictive probabilities, and we adopt that method here. The first step is the derivation of the prior predictive distribution (PPD), defined as follows:

(40)
$$p(y)=\int_\varphi f(y|\varphi)\,\pi_2(\varphi)\,d\varphi.$$

Substituting Eqs. (1) and (14) into Eq. (40) and simplifying gives
$$p(y)=\frac{1}{(a+b+c)(1+y)}\left[\frac{a\,a_1 b_1^{a_1}}{\{b_1+\ln(1+y)\}^{a_1+1}}+\frac{b\,a_2 b_2^{a_2}}{\{b_2+\ln(1+y)\}^{a_2+1}}+\frac{c\,a_3 b_3^{a_3}}{\{b_3+\ln(1+y)\}^{a_3+1}}\right],$$
where a, b and c denote the bivariate beta hyperparameters $c_1$, $c_2$ and $c_3$, so that $a/(a+b+c)=E(p_1)$, etc.

Next, since there are nine unknown hyperparameters, we equate nine integrals of the PPD to expert-assigned probabilities over the intervals $0<y\le 1/2$, $1/2<y\le 1$, $1<y\le 3/2$, $3/2<y\le 2$, $2<y\le 5/2$, $5/2<y\le 3$, $3<y\le 7/2$, $7/2<y\le 4$ and $4<y\le 9/2$, with probabilities 0.12, 0.26, 0.24, 0.15, 0.10, 0.05, 0.03, 0.02 and 0.01, respectively. Using the Mathematica software, these nine equations are solved simultaneously for the nine hyperparameters; a numerical check of the elicited values follows. The elicited values are:

$a_1=1.930747$, $b_1=1.866852$, $a_2=1.725742$, $b_2=1.631218$, $a_3=1.511704$, $b_3=1.446841$, $a=4.219401$, $b=4.064265$ and $c=3.876251$.
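As a sanity check of the elicitation, one can integrate the PPD of Eq. (40) over the nine intervals and compare with the expert probabilities. A minimal sketch (the numerical-integration route is our own; by construction the integrals should roughly reproduce 0.12, 0.26, ...):

```python
import numpy as np
from scipy.integrate import quad

a1, b1 = 1.930747, 1.866852
a2, b2 = 1.725742, 1.631218
a3, b3 = 1.511704, 1.446841
a, b, c = 4.219401, 4.064265, 3.876251  # bivariate beta hyperparameters

def ppd(y):
    """Prior predictive density p(y), Eq. (40) in simplified form."""
    w = np.log1p(y)
    term = lambda am, bm: am * bm ** am / (bm + w) ** (am + 1)
    return (a * term(a1, b1) + b * term(a2, b2)
            + c * term(a3, b3)) / ((a + b + c) * (1 + y))

edges = np.arange(0.0, 5.0, 0.5)  # 0, 0.5, ..., 4.5 -> nine intervals
probs = [quad(ppd, lo, hi)[0] for lo, hi in zip(edges[:-1], edges[1:])]
print(np.round(probs, 3))  # ideally close to the expert probabilities
```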

Next, we assess the performance of the Bayes estimators under different loss functions and prior distributions.

2.7 Monte Carlo simulation study

It is obvious from Eqs. (8)–(12), (19)–(22), (27)–(30) and (35)–(38) that an analytical comparison of the Bayes and ML estimators across the different priors and loss functions is practically impossible. Thus, a Monte Carlo simulation study is conducted to assess the performance of both methods. For the parameters $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ of the 3-CMBD, the Bayes estimates, posterior risks, ML estimates and their variances were simulated using the following steps.

  1. Generate a sample of n observations from the 3-CMBD as follows:
     (a) generate $p_1 n$ observations from the first component density $f_1(y;\varphi_1)$;
     (b) generate $p_2 n$ observations from the second component density $f_2(y;\varphi_2)$;
     (c) generate $(1-p_1-p_2)n$ observations from the third component density $f_3(y;\varphi_3)$.
  2. Fix the left and right censoring times $t_1$ and $t_2$ and record only the observations that fall between them; the rest are treated as doubly censored.
  3. Evaluate the Bayes estimates and posterior risks from the recorded observations using Eqs. (15)–(38).
  4. Repeat steps 1–3 1000 times for the desired sample size and test termination times.
  5. Calculate the ML estimates and their variances of $\varphi_1$, $\varphi_2$, $\varphi_3$, $p_1$ and $p_2$ by solving the non-linear system of Eqs. (3)–(7).

Steps 1–5 were repeated for sample sizes n = 40, 80, 140 with parameters $(\varphi_1,\varphi_2,\varphi_3,p_1,p_2)=(5,4,3,0.4,0.3)$ and right and left test termination times $(y_w,y_r)=(0.7,0.01)$ and $(1.0,0.005)$; the results are given in Tables 2–9. A minimal sketch of the data-generation steps 1–2 follows.
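The sketch below runs one replication of steps 1–2, assuming the paper's design points; the function name and the fixed censoring times are our own illustrative choices. It uses numpy's Lomax sampler, whose density $a(1+x)^{-(a+1)}$ matches a single Burr Type-XII component of Eq. (1).

```python
import numpy as np

def one_replication(n=80, phi=(5.0, 4.0, 3.0), p=(0.4, 0.3),
                    t1=0.01, t2=0.7, rng=np.random.default_rng(11)):
    """Steps 1-2: mixture sample of size n, doubly censored at (t1, t2)."""
    sizes = np.round(np.array([p[0], p[1], 1 - p[0] - p[1]]) * n).astype(int)
    samples = [rng.pareto(f, size=m) for f, m in zip(phi, sizes)]
    # keep the failures observed between the censoring times;
    # values outside (t1, t2) are the doubly censored ones
    observed = [y[(y > t1) & (y < t2)] for y in samples]
    censored = n - sum(y.size for y in observed)
    return observed, censored

obs, n_cens = one_replication()
print([y.size for y in obs], n_cens)
```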

Table 2 ML estimates (MLE) and variances of MLE (VMLE) with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(0.7,0.01)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | MLE | 6.522982 | 4.698069 | 4.655453 | 0.410898 | 0.296946 |
| | VMLE | 4.273749 | 2.428151 | 2.973820 | 0.005672 | 0.006278 |
| 80 | MLE | 5.833604 | 4.377799 | 4.351045 | 0.408381 | 0.300119 |
| | VMLE | 2.189675 | 1.669317 | 1.949776 | 0.003588 | 0.003341 |
| 140 | MLE | 5.561876 | 4.260530 | 4.236539 | 0.407652 | 0.301335 |
| | VMLE | 1.652717 | 1.408247 | 1.550785 | 0.002639 | 0.001715 |

Table 3 ML estimates (MLE) and variances of MLE (VMLE) with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(1.0,0.005)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | MLE | 5.998793 | 4.508135 | 4.612950 | 0.408934 | 0.300410 |
| | VMLE | 2.638199 | 1.960530 | 2.546950 | 0.005028 | 0.005560 |
| 80 | MLE | 5.568394 | 4.307766 | 4.312744 | 0.407661 | 0.301409 |
| | VMLE | 1.611907 | 1.468071 | 1.703164 | 0.003116 | 0.002841 |
| 140 | MLE | 5.398727 | 4.226360 | 4.217486 | 0.406708 | 0.302419 |
| | VMLE | 1.352530 | 1.288718 | 1.430004 | 0.002386 | 0.001514 |
Table 4 Bayes estimates (BE) and posterior risks (PR) under SELF with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(0.7,0.01)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | BE (JP) | 5.900496 | 4.518957 | 4.369751 | 0.409472 | 0.298520 |
| | PR (JP) | 4.015250 | 2.360066 | 2.846096 | 0.005579 | 0.006248 |
| | BE (IP) | 4.648018 | 3.422916 | 2.777409 | 0.442716 | 0.254001 |
| | PR (IP) | 0.793832 | 0.903992 | 1.026911 | 0.005566 | 0.005294 |
| 80 | BE (JP) | 5.565640 | 4.338080 | 4.227686 | 0.407871 | 0.300936 |
| | PR (JP) | 2.025971 | 1.654139 | 1.908080 | 0.003504 | 0.003300 |
| | BE (IP) | 4.817899 | 3.684536 | 3.227035 | 0.424580 | 0.276713 |
| | PR (IP) | 0.755523 | 0.809351 | 0.928294 | 0.003475 | 0.002956 |
| 140 | BE (JP) | 5.389894 | 4.247433 | 4.147544 | 0.407106 | 0.301512 |
| | PR (JP) | 1.600949 | 1.342169 | 1.463096 | 0.002612 | 0.001675 |
| | BE (IP) | 4.928436 | 3.822572 | 3.500824 | 0.417673 | 0.284235 |
| | PR (IP) | 0.710499 | 0.741753 | 0.847639 | 0.002535 | 0.001533 |

Table 5 Bayes estimates (BE) and posterior risks (PR) under SELF with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(1.0,0.005)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | BE (JP) | 5.606593 | 4.267507 | 4.293476 | 0.408680 | 0.300512 |
| | PR (JP) | 2.400866 | 1.873387 | 2.402651 | 0.004982 | 0.005556 |
| | BE (IP) | 4.673099 | 3.451894 | 2.864660 | 0.435364 | 0.260935 |
| | PR (IP) | 0.769657 | 0.848672 | 0.997485 | 0.004881 | 0.004911 |
| 80 | BE (JP) | 5.355632 | 4.202033 | 4.180521 | 0.407610 | 0.301934 |
| | PR (JP) | 1.550461 | 1.436262 | 1.671857 | 0.003109 | 0.002839 |
| | BE (IP) | 4.859147 | 3.731415 | 3.324364 | 0.423784 | 0.280649 |
| | PR (IP) | 0.717494 | 0.750316 | 0.874852 | 0.003095 | 0.002643 |
| 140 | BE (JP) | 5.291393 | 4.164323 | 4.121504 | 0.406498 | 0.302945 |
| | PR (JP) | 1.335436 | 1.277828 | 1.410477 | 0.002338 | 0.001482 |
| | BE (IP) | 4.956557 | 3.859874 | 3.584780 | 0.415915 | 0.288578 |
| | PR (IP) | 0.587117 | 0.670475 | 0.812637 | 0.001966 | 0.001444 |
Table 6 Bayes estimates (BE) and posterior risks (PR) under PLF with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(0.7,0.01)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | BE (JP) | 6.168996 | 4.753327 | 3.469761 | 0.420234 | 0.307378 |
| | PR (JP) | 1.772385 | 1.399209 | 1.436460 | 0.024061 | 0.015672 |
| | BE (IP) | 4.725848 | 3.498336 | 1.849969 | 0.449712 | 0.262942 |
| | PR (IP) | 0.688817 | 0.671810 | 0.698800 | 0.020703 | 0.014969 |
| 80 | BE (JP) | 5.747690 | 4.411300 | 3.273446 | 0.414470 | 0.306114 |
| | PR (JP) | 1.400748 | 1.217239 | 1.241449 | 0.014475 | 0.008301 |
| | BE (IP) | 4.876269 | 3.740036 | 2.300495 | 0.430492 | 0.280895 |
| | PR (IP) | 0.649071 | 0.623902 | 0.645191 | 0.012965 | 0.007947 |
| 140 | BE (JP) | 5.582474 | 4.248323 | 3.223464 | 0.409400 | 0.305292 |
| | PR (JP) | 1.257294 | 1.147590 | 1.153788 | 0.010308 | 0.004984 |
| | BE (IP) | 4.972076 | 3.852642 | 2.520284 | 0.422129 | 0.287054 |
| | PR (IP) | 0.621533 | 0.602810 | 0.612949 | 0.009397 | 0.004741 |

Table 7 Bayes estimates (BE) and posterior risks (PR) under PLF with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(1.0,0.005)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | BE (JP) | 5.796433 | 4.449247 | 3.453396 | 0.419557 | 0.306416 |
| | PR (JP) | 1.474291 | 1.284175 | 1.348579 | 0.020859 | 0.013902 |
| | BE (IP) | 4.767559 | 3.603254 | 1.934340 | 0.438950 | 0.267514 |
| | PR (IP) | 0.670746 | 0.653076 | 0.682582 | 0.018801 | 0.013628 |
| 80 | BE (JP) | 5.491982 | 4.295313 | 3.306051 | 0.412063 | 0.305669 |
| | PR (JP) | 1.244522 | 1.156128 | 1.184194 | 0.012152 | 0.007155 |
| | BE (IP) | 4.921157 | 3.776495 | 2.389084 | 0.428249 | 0.284137 |
| | PR (IP) | 0.627154 | 0.603228 | 0.625122 | 0.011497 | 0.007062 |
| 140 | BE (JP) | 5.340183 | 4.203203 | 3.164154 | 0.408602 | 0.305166 |
| | PR (JP) | 1.158203 | 1.104092 | 1.121307 | 0.008479 | 0.004317 |
| | BE (IP) | 4.993777 | 3.972254 | 2.586150 | 0.419268 | 0.291404 |
| | PR (IP) | 0.577445 | 0.583633 | 0.586950 | 0.007818 | 0.004137 |
Table 8 Bayes estimates (BE) and posterior risks (PR) under DLF with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(0.7,0.01)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | BE (JP) | 6.604126 | 4.868577 | 3.811861 | 0.425404 | 0.314023 |
| | PR (JP) | 1.294500 | 1.141540 | 1.141475 | 0.106297 | 0.039667 |
| | BE (IP) | 4.815848 | 3.554276 | 1.942059 | 0.460745 | 0.270120 |
| | PR (IP) | 0.620077 | 0.580685 | 0.583350 | 0.078928 | 0.038233 |
| 80 | BE (JP) | 5.927580 | 4.505830 | 3.438736 | 0.418362 | 0.309558 |
| | PR (JP) | 1.174500 | 1.087686 | 1.086395 | 0.064655 | 0.021499 |
| | BE (IP) | 4.965249 | 3.808036 | 2.370035 | 0.436019 | 0.284000 |
| | PR (IP) | 0.590114 | 0.555405 | 0.556896 | 0.052513 | 0.021375 |
| 140 | BE (JP) | 5.617354 | 4.386973 | 3.274624 | 0.411628 | 0.307214 |
| | PR (JP) | 1.141723 | 1.049523 | 1.057276 | 0.047837 | 0.019893 |
| | BE (IP) | 5.025196 | 3.835242 | 2.625334 | 0.425765 | 0.289270 |
| | PR (IP) | 0.566972 | 0.534215 | 0.539903 | 0.036209 | 0.017499 |

Table 9 Bayes estimates (BE) and posterior risks (PR) under DLF with parameters $\varphi_1=5$, $\varphi_2=4$, $\varphi_3=3$, $p_1=0.4$, $p_2=0.3$ and $(y_w,y_r)=(1.0,0.005)$.
| n | | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
| 40 | BE (JP) | 6.063593 | 4.566377 | 3.592436 | 0.424883 | 0.313953 |
| | PR (JP) | 1.203739 | 1.110051 | 1.115692 | 0.089890 | 0.034917 |
| | BE (IP) | 4.834369 | 3.590024 | 2.015110 | 0.458088 | 0.273865 |
| | PR (IP) | 0.607025 | 0.572435 | 0.575876 | 0.071578 | 0.033972 |
| 80 | BE (JP) | 5.561242 | 4.353483 | 3.274411 | 0.417596 | 0.308140 |
| | PR (JP) | 1.118486 | 1.067672 | 1.069311 | 0.052179 | 0.018432 |
| | BE (IP) | 4.967407 | 3.837315 | 2.422554 | 0.432926 | 0.288568 |
| | PR (IP) | 0.575338 | 0.547643 | 0.549400 | 0.045985 | 0.018089 |
| 140 | BE (JP) | 5.440693 | 4.251643 | 3.248274 | 0.409078 | 0.306499 |
| | PR (JP) | 1.084409 | 1.050916 | 1.050694 | 0.035262 | 0.011620 |
| | BE (IP) | 5.044287 | 3.900414 | 2.650680 | 0.423332 | 0.293353 |
| | PR (IP) | 0.555533 | 0.535523 | 0.537218 | 0.032203 | 0.011498 |

3 Results and discussion

From the results reported in Tables 2–9, it is observed that the amount of over-estimation (and/or under-estimation) of the three component parameters and the two proportion parameters, by both the ML and the Bayes estimators, is greater for small sample sizes. Moreover, for a given sample size, the degree of under-estimation (and/or over-estimation) is smaller for a larger right test termination time $y_w$ and a smaller left test termination time $y_r$. The Bayes estimates converge to the true parameter values as the sample size increases at fixed $y_w$ and $y_r$, and the difference between the Bayes estimates and their nominal values also shrinks toward zero as $y_w$ increases and $y_r$ decreases.

The posterior risk of a Bayes estimator and the variance of an ML estimator are the key criteria for comparing their performance. The Bayes estimators perform better than the ML estimators, as their posterior risks are smaller than the corresponding variances. Further, the variances of the ML estimators and the posterior risks of the Bayes estimators both decrease as the sample size increases. Comparing the two censoring designs in Tables 2–9, the variances and posterior risks are smaller for the larger right test termination time $y_w$ and smaller left test termination time $y_r$ (i.e., when fewer observations are censored) than for the smaller $y_w$ and larger $y_r$.

Among the different priors and loss functions used in this study, the selection of the best prior and loss function turns out to be independent of the test termination times and sample sizes; the selection is made on the basis of minimum posterior risk. The IP is more efficient than the NIP because it yields the smaller posterior risk of the Bayes estimator for any given loss function. As for the choice of loss function, the asymmetric DLF outperforms the PLF and the SELF for estimating $\varphi_1$, $\varphi_2$ and $\varphi_3$, while the symmetric SELF is the better choice for estimating $p_1$ and $p_2$.

3.1 Real data application

Gómez et al. (2014) presented a real-life data set on the lifetimes of Kevlar 373/epoxy specimens subjected to constant pressure at the 90% stress level, and showed that the data can be modeled by the exponential distribution. (The transformation $y=\sqrt{2z}$ of exponential data z would yield a Rayleigh random variable y.) There is also a close connection between the exponential and the Burr Type-XII distributions: if z is exponential, then $y=e^z-1$ follows the Burr Type-XII law in Eq. (1), so that $\ln(1+y)=z$. We therefore transform the data and use it to compute the ML and Bayes estimators. To this end, the data are randomly divided into three groups, with 26 values assigned to subpopulation-1, 25 to subpopulation-2 and 25 to subpopulation-3. Values smaller than 0.5 and larger than 3.4 are treated as censored, so that $z_r=\min(z_{1r_1},z_{2r_2},z_{3r_3})=0.5$ and $z_w=\max(z_{1w_1},z_{2w_2},z_{3w_3})=3.4$. Thus $s_1=w_1-r_1+1=19$, $s_2=w_2-r_2+1=20$ and $s_3=w_3-r_3+1=19$ Kevlar failures are observed from subpopulations 1, 2 and 3, respectively. The remaining $n-w-r+3=18$ values are censored, i.e., $w-r+3=58$ values are uncensored, where $r=r_1+r_2+r_3$, $w=w_1+w_2+w_3$ and $s=s_1+s_2+s_3$. The data are summarized as follows:
$n_1=26$, $r_1=4$, $w_1=22$, $s_1=w_1-r_1+1=19$, $\sum_{i=r_1}^{w_1}\ln(1+y_{1i})=\sum_{i=r_1}^{w_1}z_{1i}=30.5256$;
$n_2=25$, $r_2=3$, $w_2=22$, $s_2=w_2-r_2+1=20$, $\sum_{i=r_2}^{w_2}\ln(1+y_{2i})=\sum_{i=r_2}^{w_2}z_{2i}=31.9514$;
$n_3=25$, $r_3=3$, $w_3=21$, $s_3=w_3-r_3+1=19$, $\sum_{i=r_3}^{w_3}\ln(1+y_{3i})=\sum_{i=r_3}^{w_3}z_{3i}=29.7166$;
$n=n_1+n_2+n_3=76$, $r=r_1+r_2+r_3=10$, $w=w_1+w_2+w_3=65$ and $s=s_1+s_2+s_3=58$.

Since $n-w-r+3=18$, almost 23.68% of the sample is doubly censored. The ML estimates with their variances, and the Bayes estimates with their respective posterior risks, are presented in Table 10.

Table 10 ML estimates (MLE), variances of MLE (VMLE), Bayes estimates (BE) and posterior risks (PR) for the real-life mixture data.
| | φ̂1 | φ̂2 | φ̂3 | p̂1 | p̂2 |
MLE and VMLE
| MLE | 6.065421 | 5.724100 | 5.761472 | 0.344157 | 0.338021 |
| VMLE | 1.302156 | 1.582189 | 1.285327 | 0.003521 | 0.003171 |
BE and PR using JP and IP under SELF
| BE (JP) | 5.813710 | 5.520172 | 5.499120 | 0.344210 | 0.337781 |
| PR (JP) | 1.135181 | 1.241128 | 1.106516 | 0.0034912 | 0.003141 |
| BE (IP) | 4.164510 | 4.365891 | 4.470241 | 0.354210 | 0.312410 |
| PR (IP) | 0.350227 | 0.321281 | 0.402108 | 0.003109 | 0.002512 |
BE and PR using JP and IP under PLF
| BE (JP) | 6.0121045 | 5.689001 | 5.710275 | 0.354120 | 0.344120 |
| PR (JP) | 0.291543 | 0.257541 | 0.295128 | 0.014874 | 0.008801 |
| BE (IP) | 4.244187 | 4.454102 | 4.550812 | 0.368412 | 0.309128 |
| PR (IP) | 0.101029 | 0.114104 | 0.101541 | 0.009341 | 0.008615 |
BE and PR using JP and IP under DLF
| BE (JP) | 6.209124 | 5.891246 | 5.920158 | 0.356012 | 0.351102 |
| PR (JP) | 0.060450 | 0.061801 | 0.065807 | 0.025860 | 0.023091 |
| BE (IP) | 4.305710 | 4.529414 | 4.629148 | 0.369901 | 0.320241 |
| PR (IP) | 0.033120 | 0.034207 | 0.037841 | 0.019301 | 0.022019 |

It can be seen from Table 10 that the real-data results are compatible with the simulated results of Section 2.7. The Bayes estimators under the IP are relatively more precise than both the Bayes estimators under the NIP (JP) and the ML estimators, i.e., the IP yields the minimum posterior risks. Moreover, for estimating the component (proportion) parameters, the asymmetric DLF (symmetric SELF) performs better than the other loss functions considered, in agreement with the simulation.

The 90% Bayesian predictive intervals (L, U) are reported in Table 11; under the IP the predictive interval is narrower than under the JP.

Table 11 The 90% Bayesian predictive intervals (L, U).
| JP: L | JP: U | IP: L | IP: U |
| 0.010232 | 1.183630 | 0.015505 | 1.026150 |

4 Conclusion

Maximum likelihood is one of the most commonly used frequentist methods in statistics, but it cannot incorporate prior information. The Bayesian method is preferred when prior information is available because it provides a formal mechanism for including relevant knowledge. In this article, the ML estimates were computed with the Newton-Raphson method, and a comparison of the Bayes and ML estimators under different priors and loss functions was provided. To model heterogeneous doubly censored data, the 3-CMBD was considered. Using the NIP and the IP, elegant closed-form expressions of the Bayes estimators and their posterior risks under different loss functions were derived, and a Monte Carlo simulation was conducted to judge the relative performance of the ML and the Bayes estimators. The simulation shows that the Bayesian estimators, which exploit the prior information through the posterior distribution, consistently outperform the ML estimators, which rely only on the (asymptotic) likelihood. Increasing the sample size, decreasing the left test termination time $y_r$, or increasing the right test termination time $y_w$ yields better ML and Bayes estimators. The effect of the test termination times $(y_w,y_r)$, the parameter values and the sample size on the estimation of the component and proportion parameters was investigated: small samples produce large over- and/or under-estimation at fixed termination times, and the estimation error is quite small with a small left and large right termination time, and larger otherwise. Likewise, the variances of the ML estimators and the posterior risks of the Bayes estimators decrease as the sample size grows and as the observation window $(y_r,y_w)$ widens. Regarding the choice of prior and loss function, the posterior risks satisfy IP < NIP under every loss function considered, for both the component and the proportion parameters. Finally, on the basis of the tabulated results, we conclude that the IP combined with the asymmetric DLF is the more suitable choice for estimating the component parameters, whereas the IP combined with the symmetric SELF is the more suitable option for estimating the proportion parameters.

References

  1. Aslam, M. (2003). An application of prior predictive distribution to elicit the prior density. J. Stat. Theory Appl. 2(1), 70-83.
  2. Burr, I.W. (1942). Cumulative frequency functions. Ann. Math. Stat. 13, 215-232.
  3. Burr, I.W. (1968). On a general system of distributions III, the sample range. J. Am. Stat. Assoc. 63, 636-643.
  4. Burr, I.W. and Cislak, P.J. (1968). On a general system of distributions I. Its curve-shape characteristics, II. The sample median. J. Am. Stat. Assoc. 63, 627-635.
  5. Economou, P. and Caroni, C. (2005). Graphical tests for the assumption of gamma and inverse Gaussian frailty distributions. Lifetime Data Anal. 11(4), 565-582.
  6. Feroze, N. and Aslam, M. (2014a). Bayesian analysis of doubly censored lifetime data using two-component mixture of Weibull distribution. J. Natl. Sci. Found. Sri Lanka 42(4), 325-334.
  7. Feroze, N. and Aslam, M. (2014b). Posterior analysis of Burr type-V distribution under different types of censoring. Pakistan J. Stat. 21, 128-156.
  8. Garthwaite, P.H., Kadane, J.B. and O'Hagan, A. (2004). Elicitation. Working paper, University of Sheffield, UK. Available at http://www.stat.cmu.edu/tr/tr808/old808.pdf.
  9. Gómez, Y.M., Bolfarine, H. and Gómez, H.W. (2014). A new extension of the Exponential distribution. Revista Colombiana de Estadística 37, 25-34.
  10. Johnson, N.L., Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions. John Wiley & Sons, New York.
  11. Lewis, A.W. (1981). The Burr Distribution as a General Parametric Family in Survivorship and Reliability Theory Applications. PhD thesis, University of North Carolina at Chapel Hill, USA.
  12. Khan, H.M.R. et al. (2010). Predictive inference from a two-parameter Rayleigh life model given a doubly censored sample. Commun. Stat. Theory Methods 39, 1237-1246.
  13. Kim, C. and Song, S. (2010). Bayesian estimation of the parameters of the generalized exponential distribution from doubly censored samples. Stat. Pap. 51, 583-597.
  14. Feroze, N. and Aslam, M. (2018). Approximate Bayesian analysis of doubly censored samples from mixture of two Weibull distributions. Commun. Stat. Theory Methods, 1-17.
  15. Pak, A. et al. (2013). On estimation of Rayleigh scale parameter under doubly type-II censoring from imprecise data. J. Data Sci. 11, 305-322.
  16. Panahi, H. and Asadi, S. (2011). Analysis of the type II hybrid censored Burr type-XII distribution under LINEX loss function. Appl. Math. Sci. 5(79), 3929-3942.
  17. Rodriguez, R.N. (1977). A guide to the Burr type-XII distributions. Biometrika 64, 129-134.
  18. Saleem, M. (2010). Bayesian Analysis of Mixture Distributions. PhD dissertation, Department of Statistics, Quaid-i-Azam University, Islamabad, Pakistan.
  19. Sindhu, T.N. et al. (2015). Analysis of doubly censored Burr type-II distribution: a Bayesian look. Electron. J. Appl. Stat. Anal. 8, 154-169.
  20. Soliman, A.A. (2005). Estimation of parameters of life from progressively censored data using Burr XII model. IEEE Trans. Reliab. 54, 34-42.
  21. Tadikamalla, P.R. (1980). A look at the Burr and related distributions. Int. Stat. Rev. 48, 337-344.
  22. Tripathi, Y.M. et al. (2018). Estimating a linear parametric function of a doubly censored exponential distribution. Statistics 52(1), 99-114.