Article

Objective Bayesian Estimation for Tweedie Exponential Dispersion Process

1 School of Transportation and Logistics, East China Jiaotong University, Nanchang 330013, China
2 School of Aeronautical Manufacturing Engineering, Nanchang Hangkong University, Nanchang 330063, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(21), 2740; https://doi.org/10.3390/math9212740
Submission received: 30 August 2021 / Revised: 29 September 2021 / Accepted: 6 October 2021 / Published: 28 October 2021

Abstract

An objective Bayesian method for the Tweedie Exponential Dispersion (TED) process model is proposed in this paper. The TED process is a generalized stochastic process that includes several well-known stochastic processes (e.g., the Wiener, Gamma, and Inverse Gaussian processes) as special cases. Because it encompasses several types of process in one model, it is particularly useful for degradation data analysis. At present, the estimation methods for the TED model are the subjective Bayesian method and the frequentist method. However, some products have no historical information for reference and only small sample sizes are available, which poses a dilemma for both the frequentist and subjective Bayesian methods. Therefore, we propose an objective Bayesian method to analyze the TED model. Furthermore, we prove that the corresponding posterior distributions have nice properties and propose Metropolis–Hastings algorithms for the Bayesian inference. To illustrate the applicability and advantages of the TED model and the objective Bayesian method, we compare the objective Bayesian estimates with the subjective Bayesian estimates and the maximum likelihood estimates through Monte Carlo simulations. Finally, a case of GaAs laser data is used to illustrate the effectiveness of the proposed methods.

1. Introduction

With today’s advanced technology, most products are highly reliable. For these highly reliable products, it is not easy to evaluate their lifetime distribution by using traditional life-testing procedures, which record only time to failure data [1]. Even using the procedures incorporating censoring and accelerating techniques, the information about the lifetime distribution is still very limited [2]. In this case, an alternative approach is to collect the degradation data in order to analyze a product’s reliability. Compared to the lifetime data, the degradation data provide more valuable information on product failure behavior for making quick reliability assessments and other logistical decisions [3,4].
When analyzing degradation data, an important problem is how to establish an appropriate degradation model that can capture the true degradation process of a product in the field [5]. In practice, stochastic dynamics is the most common characteristic involved in the degradation process due to the uncertainties in the product’s working environments, random errors in measurements, and the individual variability of the products in a population. Stochastic process models have great potential for capturing stochastic dynamics within degradation processes. Thus, stochastic process modeling-based degradation analysis is favored by many researchers [6,7,8]. The Wiener, Gamma, and Inverse Gaussian (IG) processes are three commonly used stochastic process models [9,10]. The Wiener process is suitable for modeling non-monotonic degradation data [11]. The Gamma process [12] and IG process [13] are suitable for modeling monotonic degradation data. Although these three well-known processes fit most degradation data well, they are not suitable in some engineering applications. For example, a discrete-type compound Poisson process may be more appropriate to model the leakage current of thin gate oxides in nanotechnology [14]. Hence, a more general class of degradation model is necessary for describing real degradation data more accurately.
To promote the adaptability of the modeling method for degradation data, the Tweedie Exponential Dispersion (TED) process was proposed to describe the degradation process of products [15]. The TED process is a generalized stochastic process, which includes the Wiener, Gamma, and IG processes as the special cases. Hence, it is reasonable to use the TED process to model the degradation paths of some products.
Until now, the estimation methods for the TED model have been frequentist approaches [16,17] or the subjective Bayesian method [18,19]. However, the objective Bayesian method also has many advantages in statistical analysis [20,21]. The most appealing feature of the objective Bayesian approach is the use of noninformative priors [22]. Jeffreys’ prior and the reference prior are the two most often used noninformative priors. Jeffreys’ prior has an invariance property for the prior probability in estimation problems [20]. The reference prior can approximately describe the inferential content of the data without incorporating any other information [21].
The objective Bayesian method has been applied in the analysis of degradation models, and sometimes performs better than the frequentist approaches and the subjective Bayesian method, especially in cases of small sample size. For example, He et al. [23] employed the objective Bayesian method to study an Inverse Gaussian degradation model. The numerical results show that the proposed objective Bayesian estimates perform better than the maximum likelihood estimation (MLE) and Bootstrap methods in terms of the mean squared error (MSE) and the frequentist coverage probability. Guan et al. [24] used the objective Bayesian method to estimate the parameters of the Wiener process. The simulation results reveal that the performance of the objective Bayesian method is better than that of the MLE and subjective Bayesian estimators in terms of the rate of convergence, time efficiency, and coverage probabilities, especially in the case of small sample sizes. For more details, see [25].
This paper aims to develop an objective Bayesian method for the TED process model. Compared with the existing work, the major contribution of this paper lies in the following three aspects: (1) Noninformative priors, including Jeffreys’ prior and the reference prior, are provided, which solves the problem of how to choose an appropriate prior for the TED model without historical data in small samples; (2) The proposed priors are proven to have proper posterior distributions and probability matching properties; and (3) The corresponding Bayesian inference is obtained by using the Metropolis–Hastings (MH) algorithm.
The remainder of this article is organized as follows. In Section 2, the TED model is introduced. In Section 3, the Jeffreys prior and reference priors under different ranking groups are derived. In Section 4, the posterior properties are discussed and the Metropolis–Hastings (MH) algorithm for different priors to estimate the model parameters is proposed. In Section 5, the effectiveness of the proposed objective Bayesian method is verified by Monte Carlo simulation. In Section 6, the proposed method is applied to analyze real degradation data. Finally, our conclusions are given in Section 7.

2. TED Model

A stochastic process $\{Y(t), t \ge 0\}$ is defined as an exponential dispersion (ED) degradation process if it satisfies the following three properties [15]:
(1) $Y(0) = 0$ with probability one;
(2) $\{Y(t), t \ge 0\}$ has stationary and independent increments on non-overlapping intervals; that is, $Y(t_2) - Y(t_1)$ and $Y(t_4) - Y(t_3)$ are independent for $0 \le t_1 < t_2 \le t_3 < t_4$;
(3) The increment $\Delta Y = Y(t + \Delta t) - Y(t)$ follows an ED distribution $ED(\mu \Delta t, \lambda)$; that is,
$$Y(t + \Delta t) - Y(t) \sim ED(\mu \Delta t, \lambda) \quad \text{for } \Delta t > 0, \tag{1}$$
where the probability density function (PDF) of the ED distribution $ED(\mu \Delta t, \lambda)$ is:
$$f(\Delta y \mid \mu, \lambda) = c(\Delta y \mid \lambda, \Delta t)\, \exp\left\{\lambda\left[\Delta y\, \theta(\mu) - \Delta t\, \kappa(\theta(\mu))\right]\right\}, \tag{2}$$
where $\mu$ is the mean drift rate and $\lambda$ is the dispersion parameter; $c(\cdot)$ is a canonical function, guaranteeing that the cumulative distribution function (CDF) of Equation (2) is normalized and equal to one; and $\kappa(\cdot)$ is called the cumulant function, which is twice differentiable and satisfies $\kappa'(\theta) = \tau(\theta) = \mu$, in which $\kappa'(\theta)$ is the first derivative of $\kappa(\theta)$ with respect to $\theta$.
The mean and the variance of $Y(t)$ are, respectively, given by
$$E[Y(t)] = \mu t, \tag{3}$$
and
$$\mathrm{Var}[Y(t)] = \frac{V(\mu)\, t}{\lambda} = \frac{\kappa''(\theta)\, t}{\lambda} = \frac{\kappa''(\tau^{-1}(\mu))\, t}{\lambda}, \tag{4}$$
where $V(\mu) = \kappa''(\theta) = \kappa''(\tau^{-1}(\mu))$ is the second derivative of $\kappa(\theta)$ with respect to $\theta$, and is called the unit variance function.
The TED model is an important class of the ED model with power variance functions, that is,
$$V(\mu) = \mu^p, \quad p \in (-\infty, 0] \cup [1, +\infty), \tag{5}$$
where $p$ is a power classification parameter.
For the TED model, the function $\kappa(\theta)$ can be obtained by solving the equation $\kappa''(\tau^{-1}(\mu)) = \left.\frac{d\mu}{d\theta}\right|_{\theta = \tau^{-1}(\mu)} = \mu^p$, and the solution can be expressed as:
$$\kappa(\theta(\mu)) = \begin{cases} \dfrac{\mu^{2-p}}{2-p}, & p \ne 2, \\ \ln \mu, & p = 2. \end{cases} \tag{6}$$
Then, the canonical parameter $\theta(\mu)$ can be expressed as
$$\theta(\mu) = \begin{cases} \dfrac{\mu^{1-p}}{1-p}, & p \ne 1, \\ \ln \mu, & p = 1. \end{cases} \tag{7}$$
Specific values of $p$ correspond to specific models: $p = 0$ corresponds to the Wiener process; $p = 2$ corresponds to the Gamma process; $p = 3$ corresponds to the IG process; and $1 < p < 2$ corresponds to the Compound Poisson process. Moreover, the TED model does not exist for values of $p$ in the interval $(0, 1)$ [15,16,17]. Table 1 gives the transform relationships between the TED model and these common processes.
For the TED model, the PDF has no closed form except for some special values of $p$ [17]. According to previous research [16,17,18], the saddle-point approximation method (SAM) provides a highly accurate approximation for the TED model. Therefore, we adopt the SAM to obtain the approximated PDF of the TED model, which is expressed as
$$f(y; \mu, \lambda) \approx \sqrt{\frac{\lambda}{2\pi t^{1-p} y^p}} \cdot \exp\left(-\frac{\lambda t}{2}\, d(y; \mu)\right), \tag{8}$$
where
$$d(y; \mu) = \begin{cases} 2\left[\dfrac{y}{t} \ln \dfrac{y}{\mu t} - \left(\dfrac{y}{t} - \mu\right)\right], & p = 1, \\[6pt] 2\left[\ln \dfrac{\mu t}{y} + \dfrac{y}{\mu t} - 1\right], & p = 2, \\[6pt] 2\left[\dfrac{y^{2-p} t^{p-2}}{(1-p)(2-p)} - \dfrac{y \mu^{1-p}}{(1-p)\, t} + \dfrac{\mu^{2-p}}{2-p}\right], & p \ne 1, 2. \end{cases} \tag{9}$$
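To make the saddle-point density above concrete, the following is a minimal numerical sketch (our illustration, not the authors’ code). For $p = 0$ the deviance reduces to $(y/t - \mu)^2$ and the formula recovers the exact normal density of the Wiener process, which provides a simple sanity check.

```python
import numpy as np

def ted_saddlepoint_pdf(y, t, mu, lam, p):
    """Saddle-point approximation to the density of Y(t) ~ ED(mu*t, lam).

    y, t, mu, lam are assumed positive and p is assumed to lie outside (0, 1);
    the three branches follow the piecewise definition of d(y; mu).
    """
    if p == 1:
        d = 2 * ((y / t) * np.log(y / (mu * t)) - (y / t - mu))
    elif p == 2:
        d = 2 * (np.log(mu * t / y) + y / (mu * t) - 1)
    else:
        d = 2 * (y**(2 - p) * t**(p - 2) / ((1 - p) * (2 - p))
                 - y * mu**(1 - p) / ((1 - p) * t)
                 + mu**(2 - p) / (2 - p))
    # sqrt(lambda / (2*pi*t^{1-p}*y^p)) * exp(-lambda*t*d/2)
    return np.sqrt(lam / (2 * np.pi * t**(1 - p) * y**p)) * np.exp(-lam * t * d / 2)
```

For $p = 0$ this returns exactly the $N(\mu t, t/\lambda)$ density, consistent with the Wiener special case in Table 1.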

3. Noninformative Priors

In Bayesian inference, the prior distribution plays an important role. A reasonable prior distribution can improve the accuracy of Bayesian estimation [26]. However, sometimes we do not have any prior information, or it is difficult to obtain prior information about the parameters, which leads to the dilemma of choosing a reasonable prior distribution [27]. To overcome this problem, noninformative priors have been proposed. Jeffreys’ prior and the reference prior are two widely used noninformative priors. The procedure of objective Bayesian estimation based on Jeffreys’ prior and the reference prior is as follows:
Step 1: Derive the Fisher information matrix of the TED model, because knowledge of the Fisher information matrix is necessary to determine Jeffreys’ prior and the reference prior (see Section 3.1).
Step 2: Derive the objective priors: Jeffreys’ prior and the reference prior based on the derived Fisher information matrix (see Section 3.2).
Step 3: Analyze whether the derived Jeffreys’ prior and reference prior are the probability matching prior (see Section 3.3).
Step 4: Analyze whether the posterior distributions derived from the objective priors are proper. That is, verify whether the integrals of the posterior distributions are finite (see Section 4.1).
Step 5: Generate Markov Chain Monte Carlo (MCMC) samples using the Metropolis–Hastings (MH) algorithm, because it is difficult to obtain an explicit expression for the marginal posterior distribution of the parameter $\mu$. The objective Bayesian estimates of the parameters can then be obtained from the generated MCMC samples (see Section 4.2).

3.1. Fisher’s Information Matrix

Suppose that there are $n$ units tested in the degradation test. Let $m_i$ be the number of measurements for the $i$th unit, and let $Y(t_{ij})$ be the degradation value of the $i$th unit at the measurement time $t_{ij}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m_i$. The degradation increment between $t_{i,j-1}$ and $t_{ij}$ is denoted by $\Delta y_{ij} = Y(t_{i,j}) - Y(t_{i,j-1})$, with $Y(t_{i,0}) = 0$ and $\Delta t_{ij} = t_{i,j} - t_{i,j-1}$. Assume that $Y(t_{ij}) \sim ED(\mu t_{ij}, \lambda)$; thus, the degradation increment $\Delta y_{ij} \sim ED(\mu \Delta t_{ij}, \lambda)$. Then, the PDF of the degradation increment $\Delta y_{ij}$ is
$$f(\Delta y_{ij}; \mu, \lambda) \approx \sqrt{\frac{\lambda}{2\pi \Delta t_{ij}^{1-p} \Delta y_{ij}^{p}}}\, \exp\left(-\frac{\lambda \Delta t_{ij}}{2}\, d(\Delta y_{ij}; \mu)\right), \tag{10}$$
where
$$d(\Delta y_{ij}; \mu) = \begin{cases} 2\left[\dfrac{\Delta y_{ij}}{\Delta t_{ij}} \ln \dfrac{\Delta y_{ij}}{\mu \Delta t_{ij}} - \left(\dfrac{\Delta y_{ij}}{\Delta t_{ij}} - \mu\right)\right], & p = 1, \\[6pt] 2\left[\ln \dfrac{\mu \Delta t_{ij}}{\Delta y_{ij}} + \dfrac{\Delta y_{ij}}{\mu \Delta t_{ij}} - 1\right], & p = 2, \\[6pt] 2\left[\dfrac{\Delta y_{ij}^{2-p} \Delta t_{ij}^{p-2}}{(1-p)(2-p)} - \dfrac{\Delta y_{ij}\, \mu^{1-p}}{(1-p)\, \Delta t_{ij}} + \dfrac{\mu^{2-p}}{2-p}\right], & p \ne 1, 2. \end{cases}$$
Therefore, the likelihood function is
$$L = \prod_{i=1}^{n} \prod_{j=1}^{m_i} f(\Delta y_{ij}; \mu, \lambda).$$
Then, the log-likelihood function is
$$l = \ln L = \sum_{i=1}^{n} \sum_{j=1}^{m_i} \frac{1}{2}\left[\ln \lambda - \ln 2\pi - (1-p) \ln \Delta t_{ij} - p \ln \Delta y_{ij} - \lambda \Delta t_{ij}\, d(\Delta y_{ij}; \mu)\right]. \tag{11}$$
Setting the first partial derivatives of the log-likelihood Function (11) with respect to the parameters $\lambda$ and $\mu$ to zero and solving these equations, we obtain
$$\hat{\mu}_M = \frac{B}{C}, \qquad \hat{\lambda}_M = \frac{(1-p)(2-p)\, N}{2Z}, \tag{12}$$
where $N = \sum_{i=1}^{n} m_i$, $A = \sum_{i=1}^{n} \sum_{j=1}^{m_i} \Delta y_{ij}^{2-p} \Delta t_{ij}^{p-1}$, $B = \sum_{i=1}^{n} y_{i m_i}$, $C = \sum_{i=1}^{n} t_{i m_i}$, and $Z = A - B^{2-p} C^{p-1}$.
Substituting the MLEs $\hat{\mu}_M$ and $\hat{\lambda}_M$ into Formula (11), and maximizing this profile log-likelihood function (or, equivalently, minimizing the negative profile log-likelihood function) through a one-dimensional search, the MLE $\hat{p}_M$ can be obtained. In this paper, the MATLAB function “FMINSEARCH” is used to find the minimum value of the negative profile log-likelihood function.
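The closed-form MLEs and the one-dimensional profile search can be sketched as follows (our illustration in Python rather than MATLAB; the pooled-increment data layout, the crude grid search standing in for fminsearch, and its search range over $p$ are assumptions):

```python
import numpy as np

def profile_mles(dy, dt, p):
    """Closed-form MLEs of mu and lambda for a fixed p (assumed p != 1, 2).
    dy, dt: arrays of all degradation and time increments pooled over units."""
    N = dy.size
    A = np.sum(dy ** (2 - p) * dt ** (p - 1))
    B, C = dy.sum(), dt.sum()  # telescoping sums: final levels and final times
    Z = A - B ** (2 - p) * C ** (p - 1)
    return B / C, (1 - p) * (2 - p) * N / (2 * Z)

def neg_profile_loglik(p, dy, dt):
    """Negative profile log-likelihood of p with mu, lambda profiled out."""
    mu, lam = profile_mles(dy, dt, p)
    d = 2 * (dy ** (2 - p) * dt ** (p - 2) / ((1 - p) * (2 - p))
             - dy * mu ** (1 - p) / ((1 - p) * dt)
             + mu ** (2 - p) / (2 - p))
    return -0.5 * np.sum(np.log(lam) - np.log(2 * np.pi)
                         - (1 - p) * np.log(dt) - p * np.log(dy)
                         - lam * dt * d)

def fit_p(dy, dt, grid=np.linspace(2.05, 2.95, 91)):
    """Crude grid-search stand-in for fminsearch; search range is illustrative."""
    return min(grid, key=lambda p: neg_profile_loglik(p, dy, dt))
```

A derivative-free simplex search (as fminsearch performs) could replace the grid; the point is only that $p$ is estimated by a one-dimensional search after $\mu$ and $\lambda$ are profiled out in closed form.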
Furthermore, we calculate the Fisher information matrix based on the log-likelihood function. However, according to the research in [17], it is difficult to derive the elements with respect to $p$ in the Fisher information matrix, because the second-order partial derivative of the log-likelihood function with respect to the parameter $p$ is very complicated. The corresponding expectations therefore cannot be obtained, which means that the associated noninformative priors cannot be obtained either. Consequently, we only derive the objective priors for the parameters $\lambda$ and $\mu$.
Suppose the parameter vector is $\Theta = (\lambda, \mu)$. The second-order partial derivatives of the log-likelihood function $\ln L$ with respect to $\lambda$ and $\mu$ can be expressed as:
$$\frac{\partial^2 \ln L}{\partial \lambda^2} = -\frac{N}{2\lambda^2}, \qquad \frac{\partial^2 \ln L}{\partial \lambda \partial \mu} = \frac{\partial^2 \ln L}{\partial \mu \partial \lambda} = B \mu^{-p} - C \mu^{1-p}, \qquad \frac{\partial^2 \ln L}{\partial \mu^2} = -\lambda\left[p B \mu^{-p-1} + (1-p)\, C \mu^{-p}\right]. \tag{13}$$
Furthermore,
$$E\left(-\frac{\partial^2 \ln L}{\partial \lambda^2}\right) = \frac{N}{2\lambda^2}, \qquad E\left(-\frac{\partial^2 \ln L}{\partial \lambda \partial \mu}\right) = E\left(-\frac{\partial^2 \ln L}{\partial \mu \partial \lambda}\right) = 0, \qquad E\left(-\frac{\partial^2 \ln L}{\partial \mu^2}\right) = C \lambda \mu^{-p}, \tag{14}$$
where $E(\cdot)$ is the expectation; the cross term vanishes because $E(B) = \mu C$.
Therefore, the Fisher information matrix of $\Theta$ is
$$I(\lambda, \mu) = \begin{pmatrix} \dfrac{N}{2\lambda^2} & 0 \\ 0 & C \lambda \mu^{-p} \end{pmatrix} = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}. \tag{15}$$
Based on the Fisher information matrix, we can obtain the objective priors of the parameters. In addition, according to asymptotic normality, the Fisher information matrix can also be used to construct asymptotic confidence intervals (ACIs) for the parameters $\Theta$. The MLEs $(\hat{\lambda}_M, \hat{\mu}_M)$ are approximately bivariate normally distributed with mean $(\lambda, \mu)$ and variance–covariance matrix $I^{-1}(\lambda, \mu)$, which is expressed as
$$I^{-1}(\lambda, \mu) = \begin{pmatrix} \dfrac{2\lambda^2}{N} & 0 \\ 0 & \dfrac{\mu^p}{C\lambda} \end{pmatrix} = \begin{pmatrix} \mathrm{var}(\hat{\lambda}_M) & \mathrm{cov}(\hat{\lambda}_M, \hat{\mu}_M) \\ \mathrm{cov}(\hat{\lambda}_M, \hat{\mu}_M) & \mathrm{var}(\hat{\mu}_M) \end{pmatrix}.$$
Therefore, the $100(1-\xi)\%$ ACIs for the parameters $\lambda$ and $\mu$ are, respectively, expressed as
$$\left[\hat{\lambda}_M - z_{\xi/2}\sqrt{\mathrm{var}(\hat{\lambda}_M)},\; \hat{\lambda}_M + z_{\xi/2}\sqrt{\mathrm{var}(\hat{\lambda}_M)}\right], \qquad \left[\hat{\mu}_M - z_{\xi/2}\sqrt{\mathrm{var}(\hat{\mu}_M)},\; \hat{\mu}_M + z_{\xi/2}\sqrt{\mathrm{var}(\hat{\mu}_M)}\right],$$
where $z_{\xi/2}$ is the upper percentile of the standard normal distribution with right-tail probability $\xi/2$.
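The plug-in construction of the ACIs can be sketched as follows (our illustration; the argument names and the use of the MLEs $\hat{\lambda}_M$, $\hat{\mu}_M$, $\hat{p}_M$ in place of the unknown true parameters are assumptions):

```python
from statistics import NormalDist
import math

def asymptotic_cis(lam_hat, mu_hat, p_hat, N, C, xi=0.05):
    """100(1-xi)% ACIs for lambda and mu from the inverse Fisher information,
    evaluated at the MLEs."""
    var_lam = 2 * lam_hat ** 2 / N            # [I^{-1}]_{11}
    var_mu = mu_hat ** p_hat / (C * lam_hat)  # [I^{-1}]_{22}
    z = NormalDist().inv_cdf(1 - xi / 2)      # upper xi/2 normal percentile
    return ((lam_hat - z * math.sqrt(var_lam), lam_hat + z * math.sqrt(var_lam)),
            (mu_hat - z * math.sqrt(var_mu), mu_hat + z * math.sqrt(var_mu)))
```

Each interval is symmetric about the corresponding MLE, with half-width $z_{\xi/2}$ times the estimated standard error.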

3.2. Jeffreys’ and Reference Priors

Jeffreys’ prior is proportional to the square root of the determinant of the Fisher information matrix. Additionally, it is invariant under one-to-one transformations of the parameters. In particular, Jeffreys’ prior has many optimality properties in the absence of nuisance parameters for regular models where asymptotic normality holds [27,28]. The following theorem gives the Jeffreys’ prior of the parameters $(\lambda, \mu)$.
Theorem 1.
The Jeffreys’ prior of the parameters $(\lambda, \mu)$ is
$$\pi_J(\lambda, \mu) \propto \frac{1}{\sqrt{\lambda \mu^{p}}}. \tag{16}$$
Proof. 
According to the Fisher information Matrix (15), we have
$$|I(\lambda, \mu)| = \frac{N C \mu^{-p}}{2\lambda}.$$
According to the definition of Jeffreys’ prior,
$$\pi_J(\lambda, \mu) \propto \sqrt{|I(\lambda, \mu)|} = \sqrt{\frac{N C \mu^{-p}}{2\lambda}} \propto \frac{1}{\sqrt{\lambda \mu^{p}}}.$$
Thus, (16) holds. □
Besides Jeffreys’ prior, the reference prior also plays an important role in objective Bayesian analysis. It was proposed by Bernardo [29], and its idea is to maximize the expectation of the Kullback–Leibler divergence between the prior and posterior distributions. The posterior distribution based on the reference prior has many nice properties, such as invariance, consistency under marginalization, and consistent sampling properties [27,30]. If there is only one parameter in the model, the reference prior is equal to Jeffreys’ prior. If there are multiple parameters, the reference prior is typically not equal to Jeffreys’ prior. Before the derivation of the reference prior, the parameters need to be sorted in descending order of interest; the ordering reflects the degree of importance of the different parameters. If the reference priors are the same for different orderings, then the reference prior is robust to the order of the parameters [21,30]. Now, we derive the reference priors based on different parameter orderings.
Theorem 2.
(1) 
Under the ordering $\{\lambda, \mu\}$, the reference prior for $(\lambda, \mu)$ is
$$\pi_R(\lambda, \mu) \propto \frac{1}{\lambda \sqrt{\mu^{p}}}. \tag{17}$$
(2) 
Under the ordering $\{\mu, \lambda\}$, the reference prior is also $\pi_R(\lambda, \mu)$.
Proof. 
The proof is given in Appendix A. □
According to Theorem 2, the reference prior does not depend on the order of the parameters for the TED model; that is, there is a unique reference prior, which is robust to the parameter ordering.

3.3. Probability Matching Prior

The probability matching prior was proposed by [31]. Bayesian credible sets based on this prior have either exactly or approximately valid frequentist coverage probabilities [31,32]. In this subsection, we verify whether the derived Jeffreys’ prior and reference prior are probability matching priors.
Suppose that the parameter vector is $\theta = (\theta_1, \theta_2)$, and $\theta_1$ is the parameter of interest. Based on the probability matching prior, the credible interval for $\theta_1$ has a coverage error of order $O(n^{-1})$ in the frequentist sense, i.e.,
$$P\left(\theta_1 \le \theta_1^{1-\alpha}(X)\right) = 1 - \alpha + O(n^{-1}), \tag{18}$$
where $\theta_1^{1-\alpha}(X)$ is the $(1-\alpha)$th quantile of the posterior distribution of $\theta_1$. The priors that satisfy (18) are defined as probability matching priors.
To obtain the probability matching prior, we list two conclusions from [23]:
(1) For the parameter vector $\theta = (\theta_1, \theta_2)$, $\theta_1$ is the parameter of interest and $\theta_2$ is the nuisance parameter. The Fisher information matrix is
$$I(\theta_1, \theta_2) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}. \tag{19}$$
Then, for the interest parameter $\theta_1$, the probability matching prior $\pi(\theta_1, \theta_2)$ is the solution of the following equation:
$$\frac{\partial}{\partial \theta_2}\left(\frac{I_{12}\, \pi(\theta_1, \theta_2)}{I_{22} \sqrt{W}}\right) - \frac{\partial}{\partial \theta_1}\left(\frac{\pi(\theta_1, \theta_2)}{\sqrt{W}}\right) = 0, \tag{20}$$
where $W = I_{11} - I_{12}^2 / I_{22}$.
(2) Furthermore, if $\theta_1$ and $\theta_2$ are orthogonal, i.e., $I_{12} = I_{21} = 0$, then the above Equation (20) simplifies to
$$\frac{\partial}{\partial \theta_1}\left(\frac{\pi(\theta_1, \theta_2)}{\sqrt{W}}\right) = 0. \tag{21}$$
According to the above two conclusions, we have the following theorem.
Theorem 3.
(1) 
The Jeffreys’ prior $\pi_J(\lambda, \mu)$ is not the probability matching prior for $\lambda$, but it is the probability matching prior for $\mu$.
(2) 
The reference prior $\pi_R(\lambda, \mu)$ is the probability matching prior for both $\lambda$ and $\mu$.
Proof. 
(1)
According to the Fisher information Matrix (15), we have
$$I_{12} = I_{21} = 0.$$
For the parameter $\lambda$, we obtain $W = I_{11} = \frac{N}{2\lambda^2}$ from the Fisher information Matrix (15). Furthermore,
$$\frac{\partial}{\partial \lambda}\left(\frac{\pi_J(\lambda, \mu)}{\sqrt{W}}\right) \propto \frac{\partial}{\partial \lambda}\left(\sqrt{\lambda}\, \mu^{-p/2}\right) = \frac{\mu^{-p/2}}{2\sqrt{\lambda}} \ne 0.$$
Therefore, the Jeffreys’ prior $\pi_J(\lambda, \mu)$ does not satisfy Equation (21); that is, it is not the probability matching prior for $\lambda$.
For the parameter $\mu$, we obtain $W = I_{22} = C\lambda\mu^{-p}$. Thus,
$$\frac{\partial}{\partial \mu}\left(\frac{\pi_J(\lambda, \mu)}{\sqrt{W}}\right) \propto \frac{\partial}{\partial \mu}\left(\frac{1}{\lambda}\right) = 0.$$
According to Equation (21), the Jeffreys’ prior $\pi_J(\lambda, \mu)$ is the probability matching prior for the parameter $\mu$.
(2) Similarly, it can be easily verified that the reference prior $\pi_R(\lambda, \mu)$ satisfies Equation (21) for both $\lambda$ and $\mu$. □

4. Posterior Analysis

4.1. Posterior Distribution

In order for the posterior inferences to make sense, we need to verify whether the corresponding posterior distributions are proper.
According to the Bayesian formula, the joint posterior distribution of $\Theta = (\lambda, \mu)$ based on the Jeffreys’ prior $\pi_J(\lambda, \mu)$ is
$$\pi_J(\lambda, \mu \mid X) = \frac{L(\lambda, \mu \mid X)\, \pi_J(\lambda, \mu)}{\int_0^\infty \int_0^\infty L(\lambda, \mu \mid X)\, \pi_J(\lambda, \mu)\, d\mu\, d\lambda}. \tag{22}$$
Simplifying the above joint posterior distribution (22), we obtain
$$\pi_J(\lambda, \mu \mid X) \propto L(\lambda, \mu \mid X)\, \pi_J(\lambda, \mu) \propto \mu^{-p/2}\, \lambda^{\frac{N-1}{2}} \exp\left\{-\frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{m_i} \Delta t_{ij}\, d(\Delta y_{ij}; \mu)\right\}. \tag{23}$$
Theorem 4.
Based on the Jeffreys’ prior $\pi_J(\lambda, \mu)$, the joint posterior distribution $\pi_J(\lambda, \mu \mid X)$ is proper.
Proof. 
The proof is given in Appendix B. □
From (23), under the Jeffreys’ prior $\pi_J(\lambda, \mu)$, the conditional posterior distributions of $\lambda$ and $\mu$ are, respectively,
$$\pi_J(\lambda \mid \mu, X) \sim Ga\left(\frac{N+1}{2},\; \frac{A}{(1-p)(2-p)} - \frac{B\mu^{1-p}}{1-p} + \frac{C\mu^{2-p}}{2-p}\right), \tag{24}$$
$$\pi_J(\mu \mid \lambda, X) \propto \mu^{-p/2} \exp\left\{-\lambda\left(\frac{A}{(1-p)(2-p)} - \frac{B\mu^{1-p}}{1-p} + \frac{C\mu^{2-p}}{2-p}\right)\right\},$$
where $Ga(a, b)$ represents a gamma distribution with shape parameter $a$ and scale parameter $1/b$.
Similarly, under the reference prior $\pi_R(\lambda, \mu)$, the joint posterior distribution of $\Theta$ is
$$\pi_R(\lambda, \mu \mid X) \propto L(\lambda, \mu \mid X)\, \pi_R(\lambda, \mu) \propto \mu^{-p/2}\, \lambda^{\frac{N-2}{2}} \exp\left\{-\frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{m_i} \Delta t_{ij}\, d(\Delta y_{ij}; \mu)\right\}. \tag{25}$$
Theorem 5.
Based on the reference prior $\pi_R(\lambda, \mu)$, the posterior distribution (25) is proper.
Proof. 
The proof is given in Appendix C. □
From (25), under the reference prior $\pi_R(\lambda, \mu)$, the conditional posterior distributions of $\lambda$ and $\mu$ are, respectively,
$$\pi_R(\lambda \mid \mu, X) \sim Ga\left(\frac{N}{2},\; \frac{A}{(1-p)(2-p)} - \frac{B\mu^{1-p}}{1-p} + \frac{C\mu^{2-p}}{2-p}\right), \tag{26}$$
$$\pi_R(\mu \mid \lambda, X) \propto \mu^{-p/2} \exp\left\{-\lambda\left(\frac{A}{(1-p)(2-p)} - \frac{B\mu^{1-p}}{1-p} + \frac{C\mu^{2-p}}{2-p}\right)\right\}.$$

4.2. Sampling Algorithm

The objective Bayesian estimates of the parameters $\lambda$ and $\mu$ are the expectations of the posterior distributions. However, the conditional posterior distributions show that the posterior distribution of $\mu$ does not belong to any known parametric family, and it is difficult to obtain an explicit expression for the marginal posterior density of $\mu$. Therefore, we apply the MH algorithm to generate MCMC samples, because it can flexibly generate samples from any proposal distribution.
Under Jeffreys’ prior, the detailed procedures of the objective Bayesian estimates are as follows. For the reference prior, the procedures are similar.
Step 1: Choose an initial value $\Theta_0 = (\lambda_0, \mu_0)$ and set $i = 1$;
Step 2: For the Jeffreys’ prior, draw $\lambda_i$ from the gamma distribution
$$Ga\left(\frac{N+1}{2},\; \frac{A}{(1-p)(2-p)} - \frac{B\mu_{i-1}^{1-p}}{1-p} + \frac{C\mu_{i-1}^{2-p}}{2-p}\right);$$
Step 3: Generate $\mu_i$ from $\pi_J(\mu \mid \lambda_i, X)$ using the MH algorithm with proposal distribution $N(\mu_{i-1}, \sigma_\mu^2)$ as follows:
Step 3.1: Generate a proposal value $\mu^*$ from the normal distribution $N(\mu_{i-1}, \sigma_\mu^2)$;
Step 3.2: Generate a random number $U_\mu$ from the standard uniform distribution $U(0, 1)$;
Step 3.3: Calculate the acceptance probability
$$r_\mu = \min\left(1,\; \frac{\pi_J(\mu^* \mid \lambda_i, X)}{\pi_J(\mu_{i-1} \mid \lambda_i, X)}\right).$$
Step 3.4: Obtain the sample $\mu_i$ according to the following rule:
$$\mu_i = \begin{cases} \mu^*, & \text{if } U_\mu \le r_\mu, \\ \mu_{i-1}, & \text{if } U_\mu > r_\mu. \end{cases}$$
Step 3.5: Set i = i + 1 .
Step 4: Repeat Steps 2–3 $D$ times (a very large number, e.g., $D = 10{,}000$) and obtain $\lambda_i$ and $\mu_i$, $i = 1, 2, \ldots, D$.
Step 5: In order to guarantee convergence and remove the effect of the choice of initial values, the first $D_0$ simulated values are discarded. The remaining $D - D_0$ samples, $\lambda_i$ and $\mu_i$, $i = D_0+1, D_0+2, \ldots, D$, are used to obtain the Bayesian estimators. Therefore, based on the Jeffreys’ prior, the Bayesian estimates of $\lambda$ and $\mu$ are, respectively,
$$\hat{\lambda}_{BJ} = \frac{\sum_{i=D_0+1}^{D} \lambda_i}{D - D_0}, \qquad \hat{\mu}_{BJ} = \frac{\sum_{i=D_0+1}^{D} \mu_i}{D - D_0}.$$
Step 6: To obtain the credible intervals of $\lambda$ and $\mu$, sort $\lambda_i$ and $\mu_i$, $i = D_0+1, D_0+2, \ldots, D$, in ascending order, denoted as $\lambda_{(1)}, \lambda_{(2)}, \ldots, \lambda_{(D-D_0)}$ and $\mu_{(1)}, \mu_{(2)}, \ldots, \mu_{(D-D_0)}$, respectively. Then, based on the Jeffreys’ prior, the $100(1-\alpha)\%$ Bayesian credible intervals (CIs) of $\lambda$ and $\mu$ are given, respectively, by
$$\left(\lambda_{JLow}, \lambda_{JUp}\right) = \left(\lambda_{\left((D-D_0)\alpha/2\right)},\; \lambda_{\left((D-D_0)(1-\alpha/2)\right)}\right), \qquad \left(\mu_{JLow}, \mu_{JUp}\right) = \left(\mu_{\left((D-D_0)\alpha/2\right)},\; \mu_{\left((D-D_0)(1-\alpha/2)\right)}\right).$$
Similarly, based on the reference prior, we can obtain the Bayesian estimates $\hat{\lambda}_{BR}$ and $\hat{\mu}_{BR}$ and the CIs $(\lambda_{RLow}, \lambda_{RUp})$ and $(\mu_{RLow}, \mu_{RUp})$.
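Steps 1–6 above can be sketched as the following Metropolis-within-Gibbs routine (our illustrative implementation, not the authors’ code; the proposal standard deviation, chain length, burn-in, and the passing of the pooled statistics $A$, $B$, $C$, $N$ as arguments are assumptions):

```python
import numpy as np

def rate_term(mu, A, B, C, p):
    """Rate of the conditional gamma posterior of lambda (p != 1, 2 assumed)."""
    return (A / ((1 - p) * (2 - p)) - B * mu ** (1 - p) / (1 - p)
            + C * mu ** (2 - p) / (2 - p))

def log_cond_mu(mu, lam, A, B, C, p):
    """Log conditional posterior of mu under the Jeffreys prior, up to a constant."""
    return -0.5 * p * np.log(mu) - lam * rate_term(mu, A, B, C, p)

def mh_sampler(A, B, C, p, N, D=10_000, D0=2_000, mu0=1.0, sigma_mu=0.1, seed=1):
    """Metropolis-within-Gibbs sampler under the Jeffreys prior (Steps 1-6)."""
    rng = np.random.default_rng(seed)
    mu, lams, mus = mu0, [], []
    for _ in range(D):
        # Step 2: exact gamma draw for lambda given the current mu
        lam = rng.gamma((N + 1) / 2, 1 / rate_term(mu, A, B, C, p))
        # Step 3: normal random-walk proposal for mu, rejected if non-positive
        prop = rng.normal(mu, sigma_mu)
        if prop > 0:
            r = np.exp(log_cond_mu(prop, lam, A, B, C, p)
                       - log_cond_mu(mu, lam, A, B, C, p))
            if rng.uniform() <= r:  # accept with probability min(1, r)
                mu = prop
        lams.append(lam); mus.append(mu)
    # Step 5: discard burn-in and average the remaining draws
    return np.mean(lams[D0:]), np.mean(mus[D0:])
```

Credible intervals (Step 6) follow by taking empirical quantiles of `lams[D0:]` and `mus[D0:]` instead of their means.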

5. Simulation Study

In this section, we use Monte Carlo simulation to analyze the performance of the objective Bayesian estimates. It is assumed that the initial value of the degradation process is zero, that is, $Y(0) = 0$; each unit is measured $m = 15$ times with $\Delta t = 1$; and the failure threshold is $\omega = 30$. Two cases of values for the parameters $(\lambda, \mu)$ in the TED model are considered:
Case I: $\lambda = 0.2$, $\mu = 1$;
Case II: $\lambda = 0.5$, $\mu = 1.5$.

5.1. Comparison of Point Estimators

The objective Bayesian estimators under the Jeffreys’ and reference priors are compared with the MLE of Section 3.1 and the subjective Bayesian estimates of [19] in terms of the average biases (ABs) and mean squared errors (MSEs), to find the most efficient estimation method. In [19], the subjective prior distributions of the parameters are assumed to be $\lambda \sim N(a, b)$ and $\mu \sim N(c, d)$, where the hyperparameters $(a, b, c, d)$ are determined subjectively. We adopt two subjective priors. One is $\pi_{s1}(\lambda, \mu)$: $\lambda \sim N(0.2, 0.1)$, $\mu \sim N(1, 0.2)$, for which the prior means equal the true parameter values in Case I, i.e., $E(\lambda) = 0.2$, $E(\mu) = 1$; the other is $\pi_{s2}(\lambda, \mu)$: $\lambda \sim N(0.5, 0.3)$, $\mu \sim N(2, 0.2)$, i.e., $E(\lambda) = 0.5$, $E(\mu) = 2$.
According to the values of the parameters, samples can be generated. For each simulated sample, we calculate the MLE, the objective Bayesian estimates, and the subjective Bayesian estimates; at the same time, both the absolute bias and the squared error between the estimated value and the true value are calculated. We repeat these procedures $Q (= 6000)$ times, and then the corresponding averages of the estimated parameters can be obtained. Taking the AB and MSE of the parameter $\mu$ as an example, the expressions are
$$AB_\mu = \frac{1}{Q}\sum_{i=1}^{Q} \left|\hat{\mu}_i - \mu\right| \quad \text{and} \quad MSE_\mu = \frac{1}{Q}\sum_{i=1}^{Q} \left(\hat{\mu}_i - \mu\right)^2.$$
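These two criteria can be computed from the replicated estimates as follows (a trivial helper of ours, shown only to fix the conventions):

```python
import numpy as np

def ab_mse(estimates, true_value):
    """Average bias (mean absolute error) and mean squared error
    over Q Monte Carlo replications."""
    estimates = np.asarray(estimates, dtype=float)
    ab = np.mean(np.abs(estimates - true_value))
    mse = np.mean((estimates - true_value) ** 2)
    return ab, mse
```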
Table 2 and Table 3 present the estimated results for Cases I and II, respectively.
From Table 2 and Table 3, it is observed that
(1) As anticipated, the ABs (MSEs) of all parameter estimates decrease as the sample size increases; that is, the performance continues to improve with larger samples. Moreover, the ABs (MSEs) approach 0 as the sample size increases, so the estimates are asymptotically unbiased.
(2) Compared with the MLE and the subjective Bayesian estimates, the objective Bayesian estimates under $\pi_J(\lambda, \mu)$ and $\pi_R(\lambda, \mu)$ have smaller ABs and MSEs in most situations. In particular, the objective Bayesian estimators under the reference prior $\pi_R(\lambda, \mu)$ have the smallest ABs and MSEs. At the same time, the reference prior has many nice properties, such as invariance, consistency under marginalization, and consistent sampling properties [27,33], and $\pi_R(\lambda, \mu)$ is the probability matching prior for both $\lambda$ and $\mu$. Therefore, it is recommended to adopt the reference prior $\pi_R(\lambda, \mu)$ for inference in the TED model.
(3) In Case I, the ABs and MSEs under the subjective prior $\pi_{s1}(\lambda, \mu)$ are smaller than those under the subjective prior $\pi_{s2}(\lambda, \mu)$, because the mean of $\pi_{s1}(\lambda, \mu)$ equals the true value; the analogous situation holds in Case II. Therefore, the values of the hyperparameters of the subjective prior strongly influence the subjective Bayesian estimators and should be determined reasonably.

5.2. Comparison of Confidence Intervals

The performance of the confidence intervals for the objective Bayesian method is compared with that of the ACI and the subjective Bayesian method in terms of the width of the confidence interval (WCI) and the coverage probability (CP). For each simulated sample, the confidence interval is obtained at the 95% confidence level. We then calculate the length of the interval and check whether it covers the true value of the parameter. These procedures are repeated $Q (= 6000)$ times. The WCI is the average length of all confidence intervals, and the CP is the number of confidence intervals that cover the true values divided by $Q$. The WCIs and CPs for Case I and Case II are presented in Table 4 and Table 5, respectively. Due to space limitations, we only present the results at the 95% confidence level.
From Table 4 and Table 5,
(1) It is observed that the WCIs become smaller and the CPs approach 0.95 as the sample size $n$ increases. That is, the performance of the interval estimates improves as the sample size increases.
(2) Among the different interval estimation methods, the objective Bayesian estimates perform better than the others in terms of WCIs and CPs in most situations. In particular, the objective Bayesian estimators under the reference prior $\pi_R(\lambda, \mu)$ have the shortest WCIs and the CPs closest to 0.95. Therefore, the reference prior $\pi_R(\lambda, \mu)$ is recommended for the TED model.
(3) To study the effect of the hyperparameters of the subjective prior on the efficiency of the Bayesian credible interval, we try two subjective prior distributions with different values for $\lambda$ and $\mu$. In Case I, the mean of the subjective prior $\pi_{s1}(\lambda, \mu)$ equals the true value, and the performance based on $\pi_{s1}(\lambda, \mu)$ is better than that based on $\pi_{s2}(\lambda, \mu)$ in terms of WCIs and CPs. In Case II, the mean of $\pi_{s2}(\lambda, \mu)$ equals the true value, and the performance based on $\pi_{s2}(\lambda, \mu)$ is better than that based on $\pi_{s1}(\lambda, \mu)$. That is to say, a subjective prior distribution with reasonable information about the parameters improves the performance of the Bayesian credible interval.

6. An Illustrative Example

In this section, a real example of GaAs lasers is used to illustrate the performance of the proposed objective Bayesian estimates. The degradation data of the GaAs lasers are taken from Table C17 of Meeker and Escobar [34]. For a GaAs laser device, the percentage increase in operating current grows over time; when it reaches a predefined threshold $\omega$, the device is considered to have failed. In this example, 15 GaAs lasers were tested at 80 °C. The initial operating current increase of all the GaAs lasers is 0, i.e., $y(t_{i0}) = 0$, $i = 1, 2, \ldots, 15$. During the experiment, the percentage increase in operating current was recorded every 250 h up to 4000 h; thus, $m_1 = m_2 = \cdots = m_n = m = 16$. A GaAs laser is considered to have failed when its operating current increases by 10%, i.e., $\omega = 10\%$. Figure 1 shows the degradation paths of the operating current of the 15 GaAs lasers.
This degradation dataset has been analyzed in several references [17,35]. Peng [35] compared seven degradation models and found that the IG process is the best one according to Akaike’s information criterion (AIC). Furthermore, Xu [17] adopted the TED process to model these degradation data and used the MLE to obtain point estimates of the parameters; compared with the Wiener, Gamma, and IG processes, they found that the TED model provided a better fit than these three commonly used degradation models. Following Xu [17], we use the TED model to fit these degradation data. Then, under the Jeffreys’ prior and the reference prior, we obtain the objective Bayesian estimates of the parameters by using the MH algorithm. The initial values of the parameters are chosen as $\lambda = 3$, $\mu = 0.5$. The number of iterations is $D = 15{,}000$, and the first $D_0 = 5000$ simulated samples are discarded; the remaining samples are used for the objective Bayesian estimates. Figure 2 and Figure 3 show the sampling processes of the parameters based on the Jeffreys’ prior and the reference prior, respectively. In both figures, the trace plots reveal random scatter around the mean values depicted by the solid lines, which signifies the convergence of the MH algorithm.
Table 6 shows the estimates of the parameters obtained by the MLE and the objective Bayesian methods. From Table 6, the estimators based on the different methods are relatively close when n = 15. Furthermore, we compare the estimates of the parameters for the different methods in the case of small sample sizes. We randomly select 5 and 10 GaAs lasers from the 15 tested devices, respectively, and obtain the corresponding estimators based on the selected GaAs lasers using the different methods. The estimated parameters are also shown in Table 6. It is observed that the estimators become closer, and should be more accurate, as the sample size increases, and that the objective Bayesian estimators perform better than the MLE. Especially for small sample sizes (e.g., n = 5), the MLE is strongly affected by the sample size and its volatility is relatively large, whereas the objective Bayesian estimate is robust. These results reveal the advantage of the objective Bayesian estimators. Thus, we recommend the objective Bayesian estimate for the TED model, especially for small sample sizes.
Furthermore, the TED model is compared with the three commonly used stochastic process models in terms of the log-likelihood value and the Akaike information criterion (AIC). The model with the smallest AIC and the largest log-likelihood value is considered the best. In theory, since these three stochastic processes are special cases of the TED process, the TED model should perform at least as well. The results are shown in Table 7.
From Table 7, as expected, the TED model with p = 2.8511 has the largest log-likelihood value and the smallest AIC. Therefore, the TED model can be regarded as the best degradation model for the GaAs laser data. In addition, the AIC of the IG process is close to that of the TED model. This is because the estimated parameter p = 2.8511 is close to the IG case ( p = 3 ). At the same time, the IG model is simpler than the TED model; therefore, the IG process is also suitable for these laser data.
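The AIC entries of Table 7 follow directly from AIC = 2k − 2ℓ applied to the reported log-likelihoods. The parameter counts used below (k = 3 for the TED model, which also estimates p, and k = 2 for each special case with p fixed) are our reading of the setup, and they reproduce the table's values:

```python
# AIC = 2k - 2*loglik, with k = 3 estimated parameters (p, lam, mu) for TED
# and k = 2 for each special case, whose p is fixed in advance (Table 7 values)
models = {"TED": (3, 75.1725), "Wiener": (2, 44.5677),
          "Gamma": (2, 67.1114), "IG": (2, 73.1058)}
aic = {name: 2 * k - 2 * ll for name, (k, ll) in models.items()}
print(aic)   # TED: -144.345, Wiener: -85.1354, Gamma: -130.2228, IG: -142.2116
assert min(aic, key=aic.get) == "TED"
```

The extra parameter of the TED model is thus more than paid for by its higher log-likelihood on these data.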
Furthermore, the parameter p, with p ∉ ( 0 , 1 ), is a power classification parameter; that is, different values of p represent different stochastic process models. Besides the three commonly used stochastic process models, the TED model also includes other stochastic process models when p ∉ { 0 , 2 , 3 }. To demonstrate the influence of p, we plot the AIC with respect to p in Figure 4. The curve of the AIC varies significantly with p; that is, p has a great impact on the AIC. Thus, p is an important parameter for the TED model and should be determined carefully. Additionally, for the three commonly used stochastic process models, the applicability of each model should be analyzed before conducting degradation data analysis. Therefore, the TED model with the proposed objective Bayesian methodology is more suitable for the GaAs laser degradation data than the commonly used degradation models.
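Scanning p as above is cheap because, for a fixed p, the quasi-likelihood kernel in Appendix B admits closed-form profile estimates: μ̂ = ΣΔy/ΣΔt (independent of p and λ) and λ̂ = N/(2S(μ̂)). The sketch below illustrates such a grid scan; the function name, the synthetic increments, and the use of the kernel alone (which omits a p-dependent base-measure term, so cross-p comparisons here are only illustrative) are our assumptions:

```python
import math

def profile_fit(dy, dt, p):
    """Closed-form profile MLEs under the TED quasi-likelihood kernel for a
    fixed power p (p must avoid 1 and 2, where the formulas hold only as limits)."""
    N = len(dy)
    A = sum(y ** (2 - p) * t ** (p - 1) for y, t in zip(dy, dt))
    B, C = sum(dy), sum(dt)
    mu = B / C                       # muhat = total increment / total time, free of p
    S = (A / ((1 - p) * (2 - p))
         - B * mu ** (1 - p) / (1 - p)
         + C * mu ** (2 - p) / (2 - p))
    lam = N / (2 * S)                # from d/dlam [(N/2) ln(lam) - lam*S] = 0
    loglik = (N / 2) * math.log(lam) - lam * S   # kernel only, up to a base measure in p
    return mu, lam, loglik

# Illustrative increments over unit inspection intervals (not the laser data)
dy = [0.6, 0.5, 0.7, 0.55, 0.65, 0.6]
dt = [1.0] * 6
best_p = max((k / 10 for k in range(22, 35)), key=lambda p: profile_fit(dy, dt, p)[2])
```

Repeating this over a grid of p values and converting the profiled log-likelihoods to AIC mirrors the curve shown in Figure 4.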

7. Conclusions

The TED model is a generalized degradation model, including the Wiener, gamma, and IG processes, among others, as special cases. Existing research on statistical inference for the TED model mainly focuses on subjective Bayesian or frequentist methods. However, in applications, prior information on the unknown parameters is sometimes difficult to obtain or entirely unavailable. To overcome these problems, an objective Bayesian estimate can be used. In this study, two non-informative priors, the Jeffreys' prior and the reference prior, are adopted to obtain the objective Bayesian estimates for the TED model, and the corresponding posterior properties are discussed.
In theory, we prove that the posterior distributions based on the Jeffreys' prior and the reference prior are proper. Furthermore, we prove that the Jeffreys' prior π J ( λ , μ ) is not a probability matching prior for λ but is one for μ, whereas the reference prior π R ( λ , μ ) is a probability matching prior for both λ and μ. In addition, the reference prior has other nice properties, such as invariance under one-to-one transformations and consistency under marginalization. Thus, the reference prior π R ( λ , μ ) is recommended for making inferences about the TED model.
To further illustrate the proposed method and its effects, we compare the performance of the objective Bayesian method with the MLE and subjective Bayesian methods based on Monte Carlo simulation. The results show that the performance under the reference prior is better than that of the subjective Bayesian estimates and the MLE in terms of average bias, MSE, width of the confidence interval, and coverage probability, especially for small sample sizes. These results reveal the advantages of the reference prior. In addition, the subjective Bayesian estimators with mis-specified hyperparameters perform worst, indicating that prior distributions and their hyperparameters should be determined with care. Finally, the proposed objective Bayesian methodology is fully illustrated using a real degradation dataset, demonstrating that the TED model is effective in describing the degradation process and has a wide range of applications.
In summary, our study makes three contributions. First, we derive the Jeffreys' prior and the reference prior for the TED model, which solves the problem of choosing an appropriate prior for the TED model when no historical data are available, especially for small sample sizes. Second, the proposed priors are proven to yield proper posterior distributions and to have probability matching properties. Third, the corresponding Bayesian inference is carried out using the MH algorithm. The simulation results reveal that the proposed method provides more accurate estimates than the subjective Bayesian estimates and the MLE.
In future studies, objective Bayesian estimation can be considered for the TED model with random effects, for the TED model under accelerated degradation tests, and for the entropy of the TED model, among other extensions.

Author Contributions

Conceptualization, W.Y.; methodology, W.Y. and S.Z.; software, W.Y. and S.Z.; validation, W.Y. and S.Z.; formal analysis, W.L. and Y.Y.; investigation, W.Y. and S.Z.; writing—original draft preparation, W.Y. and S.Z.; writing—review and editing, W.L. and Y.Y.; visualization, S.Z.; supervision, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 72071099, 71861011), Jiangxi Provincial Postdoctoral Research Funding Project (No. 2019KY35), and Graduate Student Innovation Project of Jiangxi Province (No. YC 2020-S332).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 2.
(1)
We first prove the reference prior for the ordering (λ, μ). According to the Fisher information matrix $I(\lambda,\mu)$, we have
$$\pi(\mu\mid\lambda) \propto \sqrt{I_{22}} = \sqrt{C\lambda\,\mu^{-p}}.$$
Assume $(a_i, b_i) \subset (a_{i+1}, b_{i+1})$, where $\lim_{i\to\infty} a_i = 0$ and $\lim_{i\to\infty} b_i = +\infty$. Then
$$k_i(\lambda) = \left[\int_{a_i}^{b_i} \pi(\mu\mid\lambda)\,d\mu\right]^{-1} = \left[(C\lambda)^{1/2}\,\frac{2}{2-p}\left(b_i^{1-p/2} - a_i^{1-p/2}\right)\right]^{-1},$$
$$p_i(\mu\mid\lambda) = \frac{\pi(\mu\mid\lambda)}{\int_{a_i}^{b_i} \pi(\mu\mid\lambda)\,d\mu} = \frac{\mu^{-p/2}}{\frac{2}{2-p}\left(b_i^{1-p/2} - a_i^{1-p/2}\right)}.$$
Thus
$$\pi_i(\lambda) = \exp\left\{\frac{1}{2}\int_{a_i}^{b_i} p_i(\mu\mid\lambda)\,\ln\frac{|I(\lambda,\mu)|}{I_{22}(\lambda,\mu)}\,d\mu\right\} = \exp\left\{\frac{1}{2}\ln\frac{1}{2\lambda^2}\int_{a_i}^{b_i} p_i(\mu\mid\lambda)\,d\mu\right\} = \exp\left(\frac{1}{2}\ln\frac{1}{2\lambda^2}\right) \propto \frac{1}{\lambda}.$$
Therefore
$$\pi_R(\lambda,\mu) = \lim_{i\to\infty} \frac{k_i(\lambda)\,\pi_i(\lambda)}{k_i(1)\,\pi_i(1)}\,\pi(\mu\mid\lambda) = \frac{(C\lambda)^{-1/2}\,\lambda^{-1}}{C^{-1/2}}\,\sqrt{C\lambda\,\mu^{-p}} \propto \frac{1}{\lambda\,\mu^{p/2}}.$$
(2) Next, we prove the reference prior for the ordering (μ, λ). According to the Fisher information matrix $I(\lambda,\mu)$, we have
$$\pi(\lambda\mid\mu) \propto \sqrt{I_{11}} = \sqrt{\frac{1}{2\lambda^2}} = \frac{1}{\sqrt{2}\,\lambda}.$$
Assume $(a_i, b_i) \subset (a_{i+1}, b_{i+1})$, where $\lim_{i\to\infty} a_i = 0$ and $\lim_{i\to\infty} b_i = +\infty$. Then
$$k_i(\mu) = \left[\int_{a_i}^{b_i} \pi(\lambda\mid\mu)\,d\lambda\right]^{-1} = \frac{\sqrt{2}}{\ln b_i - \ln a_i},$$
$$p_i(\lambda\mid\mu) = \frac{\pi(\lambda\mid\mu)}{\int_{a_i}^{b_i} \pi(\lambda\mid\mu)\,d\lambda} = \frac{1}{\lambda\,(\ln b_i - \ln a_i)}.$$
Thus
$$\pi_i(\mu) = \exp\left\{\frac{1}{2}\int_{a_i}^{b_i} p_i(\lambda\mid\mu)\,\ln I_{22}(\lambda,\mu)\,d\lambda\right\} = \exp\left\{\frac{1}{2}\ln\left(C\mu^{-p}\right) + \frac{1}{4}\left(\ln b_i + \ln a_i\right)\right\} = \sqrt{C\mu^{-p}}\,(a_i b_i)^{1/4}.$$
Therefore
$$\pi_R(\lambda,\mu) = \lim_{i\to\infty} \frac{k_i(\mu)\,\pi_i(\mu)}{k_i(1)\,\pi_i(1)}\,\pi(\lambda\mid\mu) = \frac{\sqrt{C\mu^{-p}}\,(a_i b_i)^{1/4}}{\sqrt{C}\,(a_i b_i)^{1/4}}\cdot\frac{1}{\sqrt{2}\,\lambda} \propto \frac{1}{\lambda\,\mu^{p/2}}.$$
This completes the proof of Theorem 2. □

Appendix B

Proof of Theorem 4.
$$\int_0^{+\infty}\!\!\int_0^{+\infty} \pi_J(\lambda,\mu\mid X)\,d\lambda\,d\mu \propto \int_0^{+\infty}\!\!\int_0^{+\infty} \pi_J(\lambda,\mu)\,L(\lambda,\mu)\,d\lambda\,d\mu$$
$$\propto \int_0^{+\infty}\mu^{-p/2}\int_0^{+\infty}\lambda^{\frac{N+1}{2}-1}\exp\left\{-\lambda\sum_{i=1}^{n}\sum_{j=1}^{m_i}\Delta t_{ij}\left[\frac{(\Delta y_{ij}/\Delta t_{ij})^{2-p}}{(1-p)(2-p)}-\frac{(\Delta y_{ij}/\Delta t_{ij})\,\mu^{1-p}}{1-p}+\frac{\mu^{2-p}}{2-p}\right]\right\}d\lambda\,d\mu$$
$$=\int_0^{+\infty}\mu^{-p/2}\,\Gamma\!\left(\tfrac{N+1}{2}\right)\left[\sum_{i=1}^{n}\sum_{j=1}^{m_i}\left(\frac{\Delta y_{ij}^{2-p}\,\Delta t_{ij}^{p-1}}{(1-p)(2-p)}-\frac{\Delta y_{ij}\,\mu^{1-p}}{1-p}+\frac{\Delta t_{ij}\,\mu^{2-p}}{2-p}\right)\right]^{-\frac{N+1}{2}}d\mu$$
$$=\int_0^{+\infty} g(\mu)\,d\mu, \qquad g(\mu)=\mu^{-p/2}\,\Gamma\!\left(\tfrac{N+1}{2}\right)\left[\frac{A}{(1-p)(2-p)}-\frac{B\,\mu^{1-p}}{1-p}+\frac{C\,\mu^{2-p}}{2-p}\right]^{-\frac{N+1}{2}},$$
and we split the integral at $\mu = B/C$. When $p\in(-\infty,0]\cup(1,2)$,
$$\int_0^{B/C} g(\mu)\,d\mu+\int_{B/C}^{+\infty} g(\mu)\,d\mu<\int_0^{B/C}\mu^{-p/2}\,\Gamma\!\left(\tfrac{N+1}{2}\right)\left[\frac{A}{(1-p)(2-p)}\right]^{-\frac{N+1}{2}}d\mu+\int_{B/C}^{+\infty} g(\mu)\,d\mu<+\infty.$$
The last inequality holds because
$$\mu^{-p/2}\left[\frac{A}{(1-p)(2-p)}-\frac{B\,\mu^{1-p}}{1-p}+\frac{C\,\mu^{2-p}}{2-p}\right]^{-\frac{N+1}{2}}=O\!\left(\mu^{\frac{(p-2)N}{2}-1}\right), \qquad \mu\to+\infty.$$
When $p\in(2,+\infty)$, factoring $\mu^{-p}$ out of the bracket gives
$$\int_0^{+\infty} g(\mu)\,d\mu=\int_0^{+\infty}\mu^{pN/2}\,\Gamma\!\left(\tfrac{N+1}{2}\right)\left[\frac{A\,\mu^{p}}{(1-p)(2-p)}-\frac{B\,\mu}{1-p}+\frac{C\,\mu^{2}}{2-p}\right]^{-\frac{N+1}{2}}d\mu<+\infty,$$
again splitting at $\mu = B/C$; the finiteness follows because
$$\mu^{pN/2}\left[\frac{A\,\mu^{p}}{(1-p)(2-p)}-\frac{B\,\mu}{1-p}+\frac{C\,\mu^{2}}{2-p}\right]^{-\frac{N+1}{2}}=O\!\left(\mu^{-p/2}\right), \qquad \mu\to+\infty.$$
Consequently, we have
$$\int_0^{+\infty}\!\!\int_0^{+\infty}\pi_J(\lambda,\mu\mid X)\,d\lambda\,d\mu<+\infty.$$
This implies that the posterior distribution $\pi_J(\lambda,\mu\mid X)$ is proper. □

Appendix C

Proof of Theorem 5.
When $p\in(-\infty,0]\cup(1,2)$, then
$$\int_0^{+\infty}\!\!\int_0^{+\infty} \pi_R(\lambda,\mu\mid X)\,d\lambda\,d\mu \propto \int_0^{+\infty}\!\!\int_0^{+\infty} \pi_R(\lambda,\mu)\,L(\lambda,\mu)\,d\lambda\,d\mu$$
$$\propto \int_0^{+\infty}\mu^{-p/2}\int_0^{+\infty}\lambda^{\frac{N}{2}-1}\exp\left\{-\lambda\left[\frac{A}{(1-p)(2-p)}-\frac{B\,\mu^{1-p}}{1-p}+\frac{C\,\mu^{2-p}}{2-p}\right]\right\}d\lambda\,d\mu$$
$$=\int_0^{+\infty}\mu^{-p/2}\,\Gamma\!\left(\tfrac{N}{2}\right)\left[\frac{A}{(1-p)(2-p)}-\frac{B\,\mu^{1-p}}{1-p}+\frac{C\,\mu^{2-p}}{2-p}\right]^{-\frac{N}{2}}d\mu$$
$$<\int_0^{B/C}\mu^{-p/2}\,\Gamma\!\left(\tfrac{N}{2}\right)\left[\frac{A}{(1-p)(2-p)}\right]^{-\frac{N}{2}}d\mu+\int_{B/C}^{+\infty}\mu^{-p/2}\,\Gamma\!\left(\tfrac{N}{2}\right)\left[\frac{A}{(1-p)(2-p)}-\frac{B\,\mu^{1-p}}{1-p}+\frac{C\,\mu^{2-p}}{2-p}\right]^{-\frac{N}{2}}d\mu<+\infty.$$
The last inequality holds because
$$\mu^{-p/2}\left[\frac{A}{(1-p)(2-p)}-\frac{B\,\mu^{1-p}}{1-p}+\frac{C\,\mu^{2-p}}{2-p}\right]^{-\frac{N}{2}}=O\!\left(\mu^{\frac{(p-2)N}{2}-\frac{p}{2}}\right), \qquad \mu\to+\infty.$$
When $p\in(2,+\infty)$, factoring $\mu^{-p}$ out of the bracket gives
$$\int_0^{+\infty}\!\!\int_0^{+\infty}\pi_R(\lambda,\mu\mid X)\,d\lambda\,d\mu \propto \int_0^{+\infty}\mu^{\frac{p(N-1)}{2}}\,\Gamma\!\left(\tfrac{N}{2}\right)\left[\frac{A\,\mu^{p}}{(1-p)(2-p)}-\frac{B\,\mu}{1-p}+\frac{C\,\mu^{2}}{2-p}\right]^{-\frac{N}{2}}d\mu<+\infty,$$
again splitting at $\mu = B/C$; the finiteness follows because
$$\mu^{\frac{p(N-1)}{2}}\left[\frac{A\,\mu^{p}}{(1-p)(2-p)}-\frac{B\,\mu}{1-p}+\frac{C\,\mu^{2}}{2-p}\right]^{-\frac{N}{2}}=O\!\left(\mu^{-p/2}\right), \qquad \mu\to+\infty.$$
Consequently, we have
$$\int_0^{+\infty}\!\!\int_0^{+\infty}\pi_R(\lambda,\mu\mid X)\,d\lambda\,d\mu<+\infty.$$
This implies that the joint posterior distribution $\pi_R(\lambda,\mu\mid X)$ is proper. □

References

  1. Liao, C.-M.; Tseng, S.-T. Optimal Design for Step-Stress Accelerated Degradation Tests. IEEE Trans. Reliab. 2006, 55, 59–66. [Google Scholar] [CrossRef]
  2. Chen, D.G.; Lio, Y.; Ng, H.K.T.; Tsai, T.R. Statistical Modeling for Degradation Data; Springer: Singapore, 2017. [Google Scholar]
  3. Kang, R.; Gong, W.; Chen, Y. Model-driven degradation modeling approaches: Investigation and review. Chin. J. Aeronaut. 2020, 33, 1137–1153. [Google Scholar] [CrossRef]
  4. Li, S.; Chen, Z.; Liu, Q.; Shi, W.; Li, K. Modeling and Analysis of Performance Degradation Data for Reliability Assessment: A Review. IEEE Access 2020, 8, 74648–74678. [Google Scholar]
  5. Zhang, Z.; Si, X.; Hu, C.; Lei, Y. Degradation data analysis and remaining useful life estimation: A review on Wiener-process-based methods. Eur. J. Oper. Res. 2018, 271, 775–796. [Google Scholar] [CrossRef]
  6. Yan, W.; Riahi, H.; Benzarti, K.; Chlela, R.; Curtil, L.; Bigaud, D. Durability and Reliability Estimation of Flax Fiber Reinforced Composites Using Tweedie Exponential Dispersion Degradation Process. Math. Probl. Eng. 2021, 2021, 6629637. [Google Scholar] [CrossRef]
  7. Nikulin, M.S.; Limnios, N.; Balakrishnan, N.; Kahle, W.; Huber-Carol, C. Advances in Degradation Modeling: Applications to Reliability, Survival Analysis and Finance; Springer/Birkhauser: Boston, MA, USA, 2010. [Google Scholar]
  8. Xiao, M.; Zhang, Y.; Li, Y.; Wang, W. Degradation Modeling Based on Wiener Process Considering Multi-Source Heterogeneity. IEEE Access 2020, 8, 160982–160994. [Google Scholar] [CrossRef]
  9. Yan, W.; Liu, W.; Kong, W. Reliability evaluation of PV modules based on exponential dispersion process. Energy Rep. 2021, 7, 3023–3032. [Google Scholar] [CrossRef]
  10. Ye, Z.-S.; Xie, M. Stochastic modelling and analysis of degradation for highly reliable products. Appl. Stoch. Model. Bus. Ind. 2015, 31, 16–32. [Google Scholar] [CrossRef]
  11. Yan, W.; Bigaud, D.; Chaibati, N.M.; Izoret, L. Optimization of Accelerated Destructive Degradation Testing of Cementitious Materials for Their Performances Qualification under Aggressive Environments: The Case of Carbonation. Math. Probl. Eng. 2020, 2020, 5295627. [Google Scholar] [CrossRef] [Green Version]
  12. Cholette, M.E.; Yu, H.; Borghesani, P.; Ma, L.; Kent, G. Degradation modeling and condition-based maintenance of boiler heat ex-changers using Gamma processes. Reliab. Eng. Syst. Saf. 2019, 183, 184–196. [Google Scholar]
  13. Ye, Z.-S.; Chen, N. The Inverse Gaussian Process as a Degradation Model. Technometrics 2014, 56, 302–311. [Google Scholar] [CrossRef]
  14. Hsieh, M.-H.; Jeng, S.-L. Accelerated Discrete Degradation Models for Leakage Current of Ultra-Thin Gate Oxides. IEEE Trans. Reliab. 2007, 56, 369–380. [Google Scholar] [CrossRef]
  15. Tseng, S.-T.; Lee, I.-C. Optimum Allocation Rule for Accelerated Degradation Tests with a Class of Exponential-Dispersion Degradation Models. Technometrics 2016, 58, 244–254. [Google Scholar] [CrossRef]
  16. Duan, F.; Wang, G. Exponential-Dispersion Degradation Process Models With Random Effects and Covariates. IEEE Trans. Reliab. 2018, 67, 1128–1142. [Google Scholar] [CrossRef]
  17. Zhou, S.; Xu, A. Exponential Dispersion Process for Degradation Analysis. IEEE Trans. Reliab. 2019, 68, 398–409. [Google Scholar] [CrossRef]
  18. Chen, Z.; Pan, E.; Xia, T.; Li, Y. Optimal degradation-based burn-in policy using Tweedie exponential-dispersion process model with measurement errors. Reliab. Eng. Syst. Saf. 2019, 195, 106748. [Google Scholar] [CrossRef]
  19. Chen, Z.; Xia, T.; Li, Y.; Pan, E. Tweedie exponential dispersion processes for degradation modeling, prognostic, and accelerated degradation test planning. IEEE Trans. Reliab. 2020, 69, 887–902. [Google Scholar]
  20. Berger, J. The case for objective Bayesian analysis. Bayesian Anal. 2006, 1, 385–402. [Google Scholar] [CrossRef]
  22. Berger, J.O.; Bernardo, J.M. On the Development of Reference Priors. Bayesian Stat. 1992, 4, 35–60. [Google Scholar]
  22. Guan, Q.; Tang, Y.; Xu, A. Reference Bayesian analysis of inverse Gaussian degradation process. Appl. Math. Model. 2019, 74, 496–511. [Google Scholar] [CrossRef]
  23. He, D.; Wang, Y.; Chang, G. Objective Bayesian analysis for the accelerated degradation model based on the inverse Gaussian process. Appl. Math. Model. 2018, 61, 341–350. [Google Scholar] [CrossRef]
  24. Guan, Q.; Tang, Y.; Xu, A. Objective Bayesian analysis accelerated degradation test based on Wiener process models. Appl. Math. Model. 2016, 40, 2743–2755. [Google Scholar] [CrossRef]
  25. He, D.; Tao, M. Statistical analysis for the doubly accelerated degradation Wiener model: An objective Bayesian approach. Appl. Math. Model. 2020, 77, 378–391. [Google Scholar]
  26. Ghosh, M. Objective Priors: An Introduction for Frequentists. Stat. Sci. 2011, 26, 187–202. [Google Scholar] [CrossRef]
  27. Shakhatreh, M.K.; Dey, S.; Alodat, M. Objective Bayesian analysis for the differential entropy of the Weibull distribution. Appl. Math. Model. 2020, 89, 314–332. [Google Scholar] [CrossRef]
  28. Consonni, G.; Fouskakis, D.; Liseo, B.; Ntzoufras, I. Prior Distributions for Objective Bayesian Analysis. Bayesian Anal. 2018, 13. [Google Scholar] [CrossRef]
  29. Bernardo, J.M. Reference Posterior Distributions for Bayesian Inference. J. R. Stat. Soc. Ser. B Stat. Methodol. 1979, 41, 113–128. [Google Scholar] [CrossRef]
  30. Bernardo, J.M. Reference Analysis. In Handbook of Statistics; Elsevier: Amsterdam, The Netherlands, 2005; Volume 25, pp. 17–90. [Google Scholar]
  31. Welch, B.L.; Peers, H.W. On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods. J. R. Stat. Soc. Ser. B Stat. Methodol. 1963, 25, 318–329. [Google Scholar] [CrossRef]
  32. Tibshirani, R. Noninformative Priors for One Parameter of Many. Biometrika 1989, 76, 604. [Google Scholar] [CrossRef]
  33. Ramos, P.L.; Louzada, F.; Ramos, E. Posterior Properties of the Nakagami-m Distribution Using Noninformative Priors and Applications in Reliability. IEEE Trans. Reliab. 2017, 67, 105–117. [Google Scholar] [CrossRef]
  34. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; Wiley: New York, NY, USA, 1998. [Google Scholar]
  35. Peng, C.-Y. Inverse Gaussian Processes With Random Effects and Explanatory Variables for Degradation Data. Technometrics 2015, 57, 100–111. [Google Scholar] [CrossRef]
Figure 1. Degradation paths of the GaAs lasers.
Figure 2. The sampling process of posterior distribution under the Jeffreys prior π J ( λ , μ ) .
Figure 3. The sampling process of posterior distribution under the reference prior π R λ , μ .
Figure 4. The AICs based on different parameter p .
Table 1. Five well-known TED models.
Degradation Model | p | V(μ) | k(θ(μ)) | θ(μ) | Mean | Variance
Wiener | 0 | 1 | μ^2/2 | μ | μt | t/λ
Poisson | 1 | μ | μ | ln(μ) | μt | μt/λ
Compound Poisson | (1, 2) | μ^p | μ^(2−p)/(2−p) | μ^(1−p)/(1−p) | μt | μ^p t/λ
Gamma | 2 | μ^2 | ln(μ) | −1/μ | μt | μ^2 t/λ
Inverse Gaussian | 3 | μ^3 | −1/μ | −1/(2μ^2) | μt | μ^3 t/λ
Table 2. ABs and MSEs (within brackets) of parameters based on MLE, subjective Bayesian, and objective Bayesian estimation in case I.
n | Parameter | π J ( λ , μ ) | π R ( λ , μ ) | MLE | π s 1 ( λ , μ ) | π s 2 ( λ , μ )
20 | λ | 0.0055 (1.98 × 10−5) | 0.0052 (1.80 × 10−5) | 0.0087 (4.77 × 10−5) | 0.0069 (3.59 × 10−5) | 0.0095 (7.18 × 10−5)
 | μ | 0.0148 (1.62 × 10−4) | 0.0142 (1.56 × 10−4) | 0.0161 (1.59 × 10−4) | 0.0172 (2.19 × 10−4) | 0.0313 (5.13 × 10−4)
30 | λ | 0.0024 (1.29 × 10−5) | 0.0023 (1.22 × 10−5) | 0.0028 (1.34 × 10−5) | 0.0031 (2.27 × 10−5) | 0.0046 (3.25 × 10−5)
 | μ | 0.0041 (1.22 × 10−5) | 0.0028 (1.15 × 10−5) | 0.0048 (1.28 × 10−5) | 0.0041 (1.42 × 10−5) | 0.0062 (1.79 × 10−5)
40 | λ | 0.0021 (1.16 × 10−5) | 0.0019 (1.01 × 10−5) | 0.0022 (1.18 × 10−5) | 0.0028 (1.41 × 10−5) | 0.0041 (1.78 × 10−5)
 | μ | 0.0020 (1.07 × 10−5) | 0.0016 (1.01 × 10−5) | 0.0035 (1.26 × 10−5) | 0.0036 (1.28 × 10−5) | 0.0043 (1.51 × 10−5)
50 | λ | 0.0016 (8.52 × 10−6) | 0.0014 (8.54 × 10−6) | 0.0018 (8.15 × 10−6) | 0.0022 (1.01 × 10−5) | 0.0038 (1.04 × 10−5)
 | μ | 0.0011 (5.67 × 10−6) | 0.0009 (5.29 × 10−6) | 0.0010 (5.65 × 10−6) | 0.0023 (5.79 × 10−6) | 0.0031 (6.21 × 10−6)
Table 3. ABs and MSEs (within brackets) of parameters based on MLE, subjective Bayesian, and objective Bayesian estimation in case II.
n | Parameter | π J ( λ , μ ) | π R ( λ , μ ) | MLE | π s 1 ( λ , μ ) | π s 2 ( λ , μ )
20 | λ | 0.0136 (4.34 × 10−4) | 0.0132 (3.98 × 10−4) | 0.0144 (4.01 × 10−4) | 0.0232 (7.32 × 10−4) | 0.0179 (5.11 × 10−4)
 | μ | 0.0150 (1.20 × 10−4) | 0.0128 (1.21 × 10−4) | 0.0138 (1.20 × 10−4) | 0.0309 (3.14 × 10−4) | 0.0278 (4.50 × 10−4)
30 | λ | 0.0028 (6.59 × 10−5) | 0.0024 (5.84 × 10−5) | 0.0029 (6.70 × 10−5) | 0.0054 (8.96 × 10−5) | 0.0042 (7.13 × 10−5)
 | μ | 0.0063 (5.30 × 10−5) | 0.0056 (5.75 × 10−5) | 0.0067 (5.44 × 10−5) | 0.0089 (1.23 × 10−4) | 0.0071 (1.23 × 10−4)
40 | λ | 0.0021 (4.44 × 10−5) | 0.0016 (4.29 × 10−5) | 0.0020 (4.41 × 10−5) | 0.0041 (7.28 × 10−5) | 0.0037 (5.56 × 10−5)
 | μ | 0.0055 (4.32 × 10−5) | 0.0051 (4.24 × 10−5) | 0.0055 (4.17 × 10−5) | 0.0072 (6.38 × 10−5) | 0.0063 (5.32 × 10−5)
50 | λ | 0.0015 (2.42 × 10−5) | 0.0011 (2.39 × 10−5) | 0.0014 (2.41 × 10−5) | 0.0029 (3.32 × 10−5) | 0.0022 (2.54 × 10−5)
 | μ | 0.0041 (3.93 × 10−5) | 0.0040 (3.68 × 10−5) | 0.0041 (3.94 × 10−5) | 0.0057 (4.52 × 10−5) | 0.0043 (4.11 × 10−5)
Table 4. The WCIs and CPs (within bracket) of parameters in case I.
n | Parameter | π J ( λ , μ ) | π R ( λ , μ ) | ACI | π s 1 ( λ , μ ) | π s 2 ( λ , μ )
20 | λ | 0.5241 (0.9206) | 0.5042 (0.9321) | 0.8844 (0.9898) | 0.6278 (0.9073) | 0.7769 (0.9875)
 | μ | 0.5053 (0.9002) | 0.5011 (0.9018) | 0.5061 (0.8895) | 0.7579 (0.8872) | 0.9148 (0.8948)
30 | λ | 0.4185 (0.9255) | 0.4178 (0.9379) | 0.7631 (0.9889) | 0.5101 (0.9158) | 0.6417 (0.9763)
 | μ | 0.4458 (0.9218) | 0.4059 (0.9252) | 0.4132 (0.9102) | 0.7098 (0.9065) | 0.8930 (0.8972)
40 | λ | 0.4012 (0.9326) | 0.4003 (0.9408) | 0.7231 (0.9769) | 0.4327 (0.9216) | 0.6242 (0.9692)
 | μ | 0.4318 (0.9309) | 0.3974 (0.9318) | 0.3578 (0.9207) | 0.6427 (0.9211) | 0.7111 (0.9042)
50 | λ | 0.3801 (0.9402) | 0.3592 (0.9529) | 0.6009 (0.9621) | 0.4013 (0.9371) | 0.6031 (0.9687)
 | μ | 0.3198 (0.9416) | 0.3176 (0.9422) | 0.3201 (0.9318) | 0.6374 (0.9272) | 0.5721 (0.9151)
Table 5. The WCIs and CPs (within bracket) of parameters in case II.
n | Parameter | π J ( λ , μ ) | π R ( λ , μ ) | ACI | π s 1 ( λ , μ ) | π s 2 ( λ , μ )
20 | λ | 1.5762 (0.9798) | 1.4251 (0.9717) | 1.8872 (1.0000) | 1.9838 (0.9997) | 1.7527 (0.9971)
 | μ | 0.6032 (0.8947) | 0.5982 (0.9038) | 0.6123 (0.8525) | 0.8872 (0.8513) | 0.8410 (0.8625)
30 | λ | 1.4612 (0.9734) | 1.3332 (0.9621) | 1.6922 (0.9918) | 1.8767 (0.9945) | 1.6965 (0.9901)
 | μ | 0.4912 (0.9017) | 0.4735 (0.9216) | 0.5000 (0.8721) | 0.8018 (0.8852) | 0.7157 (0.8831)
40 | λ | 1.3493 (0.9615) | 1.2387 (0.9587) | 1.4288 (0.9801) | 1.5442 (0.9892) | 1.4723 (0.9817)
 | μ | 0.4597 (0.9101) | 0.4338 (0.9334) | 0.4330 (0.8901) | 0.7643 (0.8948) | 0.6265 (0.8976)
50 | λ | 1.2421 (0.9559) | 1.1356 (0.9498) | 1.3317 (0.9747) | 1.4947 (0.9751) | 1.3999 (0.9711)
 | μ | 0.4186 (0.9224) | 0.4047 (0.9471) | 0.4173 (0.9095) | 0.5049 (0.9003) | 0.4892 (0.9098)
Table 6. Point estimators and WCIs (within bracket) of parameters based on different methods.
n | Method | λ | μ
n = 15 | π J ( λ , μ ) | 1.3996 × 10−4 (5.0451 × 10−5) | 2.0267 × 10−3 (1.6703 × 10−4)
 | π R ( λ , μ ) | 1.3963 × 10−4 (5.0264 × 10−5) | 2.0583 × 10−3 (1.5348 × 10−4)
 | MLE | 1.3159 × 10−4 (7.2950 × 10−5) | 2.0371 × 10−3 (1.7420 × 10−4)
n = 10 | π J ( λ , μ ) | 1.8765 × 10−4 (8.2399 × 10−5) | 2.0001 × 10−3 (2.2589 × 10−4)
 | π R ( λ , μ ) | 1.5141 × 10−4 (8.1270 × 10−5) | 2.0427 × 10−3 (2.2377 × 10−4)
 | MLE | 8.7721 × 10−4 (0.0049) | 1.5899 × 10−3 (2.4587 × 10−4)
n = 5 | π J ( λ , μ ) | 1.9102 × 10−4 (0.0026) | 2.0071 × 10−3 (2.3370 × 10−4)
 | π R ( λ , μ ) | 1.6087 × 10−4 (0.0026) | 2.0331 × 10−3 (2.3158 × 10−4)
 | MLE | 2.1437 × 10−3 (0.0119) | 1.3915 × 10−3 (2.7683 × 10−4)
Table 7. AIC and log-likelihood of different models for the GaAs laser data.
Model | TED | Wiener | Gamma | IG
p | 2.8511 | 0 | 2 | 3
λ | 1.3963 × 10−4 | 6.24 × 10−3 | 0.0281 | 5.44 × 10−5
μ | 2.0583 × 10−3 | 2.02 × 10−3 | 2.03 × 10−3 | 2.03 × 10−3
AIC | −144.3450 | −85.1354 | −130.2228 | −142.2116
Log-likelihood | 75.1725 | 44.5677 | 67.1114 | 73.1058
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yan, W.; Zhang, S.; Liu, W.; Yu, Y. Objective Bayesian Estimation for Tweedie Exponential Dispersion Process. Mathematics 2021, 9, 2740. https://doi.org/10.3390/math9212740
