THE GENERALIZED EXPONENTIAL DISTRIBUTION AS A LIFETIME MODEL UNDER DIFFERENT LOSS FUNCTIONS

Bayes estimates of the unknown parameter and the reliability function for the generalized exponential model are derived under various loss functions: the squared error, the absolute error, the squared log error, and the entropy loss. Monte Carlo simulations are presented to compare these Bayes estimates with the maximum likelihood estimates of the unknown parameter and the reliability function.


INTRODUCTION
Gamma and Weibull distributions are the most popular distributions for life data analysis. One of the major disadvantages of the gamma distribution is that its distribution and survival functions cannot be expressed in closed form if the shape parameter is not an integer. Moreover, there are terms involving the incomplete gamma function, and thus the distribution, survival, or hazard functions need to be obtained by numerical integration. This makes the gamma distribution unpopular compared to the Weibull distribution, which has a nice closed form for its hazard and survival functions. On the other hand, the Weibull distribution has its own disadvantages. For example, Bain and Engelhardt (1991) have pointed out that the maximum likelihood estimators of a Weibull distribution might not behave properly over all parameter ranges.
The generalized exponential (GE) distribution has been used as an alternative to the gamma and Weibull distributions in many situations by Gupta and Kundu (1999). The probability density function (pdf) and cumulative distribution function (cdf) of the one-parameter GE distribution are given, respectively, by

f(x; θ) = θ (1 - e^(-x))^(θ-1) e^(-x), x > 0, (1)

F(x; θ) = (1 - e^(-x))^θ, x > 0. (2)

Here, θ > 0 is the shape parameter. When θ = 1, the above distribution corresponds to the exponential distribution. It is important to mention here that when θ is a positive integer, the GE cdf is the cdf of the maximum of a random sample of size θ from the standard exponential distribution. Due to the simple structure of its distribution function, the GE distribution can be used quite effectively in analyzing any lifetime data, especially in the presence of censoring or if the data are grouped.
The reliability function of the distribution at time t is given by R(t) = 1 - (1 - e^(-t))^θ.
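The pdf, cdf, and reliability function above all have closed forms, so the model is easy to work with numerically. A minimal Python sketch (the function names are ours, not from the paper); the sampler simply inverts the cdf:

```python
import math
import random

def ge_pdf(x, theta):
    # pdf (1): f(x; theta) = theta * (1 - e^(-x))^(theta - 1) * e^(-x), x > 0
    return theta * (1.0 - math.exp(-x)) ** (theta - 1.0) * math.exp(-x)

def ge_cdf(x, theta):
    # cdf (2): F(x; theta) = (1 - e^(-x))^theta -- closed form, unlike the gamma cdf
    return (1.0 - math.exp(-x)) ** theta

def ge_reliability(t, theta):
    # R(t) = 1 - F(t; theta)
    return 1.0 - ge_cdf(t, theta)

def ge_sample(theta, rng=random):
    # Inverse-cdf sampling: solving F(x) = u for u ~ Uniform(0, 1)
    # gives x = -ln(1 - u^(1/theta))
    u = rng.random()
    return -math.log(1.0 - u ** (1.0 / theta))
```

For θ = 1 these reduce to the standard exponential distribution, which gives a quick sanity check.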
In studying failure models, from the Bayesian point of view, the parameter is considered as a random variable having a specified prior distribution. After a sample has been observed, several estimates can be derived from the posterior distribution. The loss function is used to represent a penalty associated with each of the possible estimates. Traditionally, most authors use the simple squared error loss function and obtain the posterior mean as the Bayesian estimate. However, in practice, the real loss function is often not symmetric. For example, Feynman (1987) remarks that in the space shuttle disaster, the management may have overestimated the average life or reliability of solid fuel rocket boosters. The consequences of overestimates, in loss of human life, are much more serious than the consequences of underestimates. In this case, an asymmetric loss function might have been more appropriate. In recent years, many authors have considered asymmetric loss functions in reliability, such as Basu and Ebrahimi (1991), Moore and Papadopoulos (2000), Soliman (2005), and Ren et al. (2006).
The purpose of this paper is to study the GE model from the Bayesian point of view under various losses, such as the squared error, the absolute error, the squared log error, and the entropy loss functions. In Section 2, the loss functions and the priors are described. In Section 3, we obtain Bayes estimates of θ and the reliability function under each of these loss functions. We choose a conjugate prior for θ, which includes the Jeffreys prior as a special case. In Section 4, we use the Kolmogorov-Smirnov (KS) goodness of fit test to test the hypothesis that our estimates for θ, under the different approaches, are a good approximation for the true value of θ. Finally, in Section 5, the different Bayes estimates are compared with the maximum likelihood estimates (MLEs) via Monte Carlo simulations, and some concluding remarks are provided.

LOSS FUNCTIONS AND PRIORS
To estimate the unknown parameter θ and the reliability function, four loss functions are considered. The first is the common squared error loss,

L_1(θ, θ̂) = (θ̂ - θ)^2. (3)

The second is the absolute error loss,

L_2(θ, θ̂) = |θ̂ - θ|. (4)

The third is the squared log error loss,

L_3(θ, θ̂) = (ln θ̂ - ln θ)^2. (5)

The fourth is the entropy loss,

L_4(θ, θ̂) = (θ̂/θ) - ln(θ̂/θ) - 1. (6)

Unlike L_1 and L_2, the losses L_3 and L_4 are not symmetric in θ̂ - θ. As the prior for θ we take the conjugate gamma prior Gamma(α, β),

π(θ) ∝ θ^(α-1) e^(-βθ), θ > 0, (7)

which includes the Jeffreys prior,

π(θ) ∝ 1/θ, θ > 0, (8)

as the improper special case α = β = 0.
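The four loss functions can be written down directly; a short Python sketch (function names are ours) that also makes the asymmetry of the entropy loss easy to check numerically:

```python
import math

def squared_error(est, theta):
    # L1: (theta_hat - theta)^2, symmetric
    return (est - theta) ** 2

def absolute_error(est, theta):
    # L2: |theta_hat - theta|, symmetric
    return abs(est - theta)

def squared_log_error(est, theta):
    # L3: (ln theta_hat - ln theta)^2, not symmetric in theta_hat - theta
    return (math.log(est) - math.log(theta)) ** 2

def entropy_loss(est, theta):
    # L4 (up to a constant factor): r - ln r - 1 with r = theta_hat / theta;
    # overestimation and underestimation by the same amount are penalized unequally
    r = est / theta
    return r - math.log(r) - 1.0
```

For example, with θ = 1, overestimating by a factor of 2 and underestimating by a factor of 2 give different entropy losses, while the squared error treats ±0.5 identically.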

BAYES ESTIMATION
In order to estimate θ and the reliability function, a life test is conducted on n items, and as each failure occurs, the time is recorded. Suppose that x = (x_1, x_2, …, x_n) are the observed lifetimes of the n items. When all n items have failed, the test ends. The likelihood function is given by

L(θ; x) = θ^n exp(-∑_{i=1}^{n} x_i) exp(-(θ - 1) s(x)), where s(x) = -∑_{i=1}^{n} ln(1 - e^(-x_i)). (9)

From (9), the maximum likelihood estimates (MLEs) of θ and R(t) are given, respectively, by

θ̂ = n / s(x) (10)

and

R̂(t) = 1 - (1 - e^(-t))^θ̂. (11)

Now, we want to find Bayes estimates under the loss functions L_i, i = 1, 2, 3, 4. Under the conjugate prior (7), the posterior distribution of θ is Gamma(n + α, β + s(x)). Clearly, the Jeffreys prior (8) is a special case of (7) with α = β = 0, under which the posterior distribution of θ is Gamma(n, s(x)).
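A brief sketch of the statistic s(x), the MLEs, and the posterior parameters, assuming the gamma-prior parameterization above (function names are ours):

```python
import math

def s_stat(xs):
    # s(x) = -sum_i ln(1 - e^(-x_i)), the sufficient statistic in (9)
    return -sum(math.log(1.0 - math.exp(-x)) for x in xs)

def theta_mle(xs):
    # (10): theta_hat = n / s(x)
    return len(xs) / s_stat(xs)

def reliability_mle(t, xs):
    # (11): plug the MLE of theta into R(t)
    return 1.0 - (1.0 - math.exp(-t)) ** theta_mle(xs)

def posterior_params(xs, alpha=0.0, beta=0.0):
    # Posterior Gamma(n + alpha, beta + s(x)); alpha = beta = 0 is the Jeffreys case
    return len(xs) + alpha, beta + s_stat(xs)
```

With α = β = 0 the posterior shape and rate reduce to n and s(x), matching the Jeffreys special case noted above.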

Estimation of θ
For the squared error loss L_1, under prior (7), the Bayes estimate of θ is the posterior mean,

θ̂_1 = (n + α) / (β + s(x)). (12)

It is easy to see that the MLE of θ is the same as θ̂_1 under the Jeffreys prior (8). The Bayes estimate of θ under the absolute error loss L_2 and prior (7) is the median of the posterior distribution. Since the posterior distribution is Gamma(n + α, β + s(x)), the Bayes estimate of θ is

θ̂_2 = m / (2(β + s(x))), (13)

where m is the median of a chi-square pdf with 2(n + α) degrees of freedom, provided 2(n + α) is an integer.
For the squared log error loss L_3 and prior (7), the Bayes estimate of θ is

θ̂_3 = exp(ψ(n + α)) / (β + s(x)), (14)

where ψ(·) is the digamma function, since E[ln θ | x] = ψ(n + α) - ln(β + s(x)). The Bayes estimate of θ under the entropy loss L_4 and prior (7) is

θ̂_4 = (n + α - 1) / (β + s(x)), (15)

the reciprocal of the posterior mean of 1/θ.
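These closed-form estimates are straightforward to compute. The sketch below (names ours) uses a plain-stdlib digamma approximation in place of scipy.special.digamma, and omits the posterior-median estimate θ̂_2 since the standard library has no gamma quantile function:

```python
import math

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x + 1) - 1/x, then an
    # asymptotic series that is accurate for x >= 6
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return acc + math.log(x) - 0.5 * inv \
        - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def s_stat(xs):
    # s(x) = -sum_i ln(1 - e^(-x_i))
    return -sum(math.log(1.0 - math.exp(-x)) for x in xs)

def bayes_theta(xs, alpha=0.0, beta=0.0):
    # Bayes estimates (12), (14), (15) for the Gamma(n + alpha, beta + s(x))
    # posterior; the entropy estimate requires n + alpha > 1
    a = len(xs) + alpha
    b = beta + s_stat(xs)
    est_sq = a / b                       # (12) posterior mean
    est_log = math.exp(digamma(a)) / b   # (14) exp(E[ln theta | x])
    est_ent = (a - 1.0) / b              # (15) 1 / E[1/theta | x]
    return est_sq, est_log, est_ent
```

Since ψ(a) < ln(a) and a - 1 < a, the three estimates are always ordered θ̂_4 < θ̂_3 < θ̂_1 for the same posterior.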

Estimation of R(t)
Data Science Journal, Volume 8, 24 October 2009

Analogous to the four loss functions used in estimating θ, we may now use the same four loss functions for estimating the reliability function R(t). For the squared error loss L_1, the Bayes estimate of the reliability function R(t) under the prior (7) is the posterior mean,

R̂_1(t) = 1 - [(β + s(x)) / (β + s(x) - ln(1 - e^(-t)))]^(n+α),

which follows by evaluating the moment generating function of the gamma posterior at ln(1 - e^(-t)) < 0. Finally, the Bayes estimate of R(t) with the gamma prior Gamma(α, β) under the entropy loss L_4 is given by

R̂_4(t) = [ ∑_{j=0}^{∞} ((β + s(x)) / (β + s(x) - j ln(1 - e^(-t))))^(n+α) ]^(-1),

since 1/R(t) = ∑_{j=0}^{∞} (1 - e^(-t))^(jθ).
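A sketch of the squared-error Bayes estimate of R(t), assuming the gamma posterior above (the entropy-loss series estimate is omitted; names ours):

```python
import math

def s_stat(xs):
    # s(x) = -sum_i ln(1 - e^(-x_i))
    return -sum(math.log(1.0 - math.exp(-x)) for x in xs)

def bayes_reliability_sq(t, xs, alpha=0.0, beta=0.0):
    # Posterior mean of R(t): since R(t) = 1 - e^(theta * ln u) with
    # u = 1 - e^(-t), this is one minus the gamma posterior's moment
    # generating function evaluated at ln u (which is negative)
    a = len(xs) + alpha
    b = beta + s_stat(xs)
    log_u = math.log(1.0 - math.exp(-t))
    return 1.0 - (b / (b - log_u)) ** a
```

Because ln u < 0, the ratio inside the power is strictly between 0 and 1, so the estimate always lies in (0, 1) and decreases in t, as a reliability estimate should.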

KOLMOGOROV-SMIRNOV GOODNESS OF FIT TEST
In this section, the Kolmogorov-Smirnov (KS) goodness of fit test is used to check the validity of the hypothesis that the samples are from a GE distribution. This test examines a random sample to test the null hypothesis that the unknown distribution function is in fact a specified function F*(x) = F(x; θ̂), where θ̂ is given by one of the equations (10), (12), (13), (14), or (15).
Suppose that we would like to test the hypotheses H_0: F(x) = F*(x) against H_1: F(x) ≠ F*(x). The test statistic is

D_n = sup_x |F_n(x) - F*(x)|,

which is in fact the maximum difference between the empirical distribution F_n(x) and the hypothesized parameterized distribution F*(x). The null hypothesis H_0 is rejected at the level of significance γ if the test statistic D_n exceeds the 1 - γ quantile given by the appropriate table. We use the KS goodness of fit test to test the hypothesis that our different estimates of θ, under the different approaches, are good approximations to the true value of θ.

SIMULATION AND CONCLUDING REMARKS
In this section, a Monte Carlo simulation study was carried out to compare the performance of the different Bayes estimates and the MLEs.The mean square error (MSE) was used to compare the estimates.
The ML and Bayes estimates were compared based on a Monte Carlo simulation as follows:
1. For given values of α and β, we generated θ from the gamma prior pdf (7).
2. Using the value of θ from step 1, we generated a sample of size n from the GE pdf in (1).
3. Different estimates for the parameter θ and the reliability function were computed from the generated sample. Furthermore, the KS goodness of fit statistic was calculated for the estimates of θ, and the decision to reject or not reject the null hypothesis was noted.
4. Steps 1-3 were repeated 5000 times, and average values and MSEs of the estimates were computed over the 5000 repetitions.
Tables 1-6 give the number of times, under the Kolmogorov criterion at significance level γ = 0.05, that we did not reject the null hypothesis that the data were sampled from a GE distribution with parameter θ. Below this, the generated values of θ (taken as the true values of θ), the average values of the estimates of θ, and the mean square errors are given. Tables 1-3 give the results for α = 3 and β = 2; Tables 4-6 give the results for α = 0.5 and β = 1.5. We considered n = 10, 20, and 50. Note that GP stands for gamma prior and IP for improper prior. The last part of each table gives the estimates of the reliability and the associated mean square errors for varying times. From the results, we observe the following.
(1) From the KS goodness of fit test, we can conclude that the Bayesian and maximum likelihood estimators (MLEs) for θ and R(t) lead to good approximations for the parameters and reliability functions for the GE distribution.
(2) Tables 1-6 show that the Bayes estimates are better than the MLEs (in the sense of MSEs). Also, the MSEs decrease as n increases.
(3) From Tables 1-3, for α = 3 and β = 2, we observe that the gamma prior overestimates the parameter θ under the squared error, the absolute error, and the squared log error loss functions but underestimates it under the entropy loss function. It is also observed that the improper prior overestimates the parameter θ under most of the loss functions considered. From Tables 4-6, for α = 0.5 and β = 1.5, we observe that the gamma prior underestimates the parameter θ under the different loss functions, but the improper prior overestimates it.
(4) In general, for α = 3 and β = 2, the Bayes estimators under the gamma prior have smaller MSEs than the improper-prior Bayes estimators, which in turn have MSEs less than or equal to those of the MLE. We also observe that the entropy loss function gives the smallest MSEs in this case. For α = 0.5 and β = 1.5, both the proper and improper Bayes estimators have smaller MSEs than the MLE. In this case, the squared error loss function gives the smallest MSEs.
(5) We observe that the Bayes estimates of θ under the squared error loss function and the improper prior reduce to the MLEs.