Exponential two step approach for Time Domain based Software Process Control

A Software Reliability Growth Model is a mathematical model of how software reliability improves as faults are detected and repaired. In this paper we propose a control mechanism based on the cumulative quantity between observations of time domain failure data, using the mean value function of the Goel-Okumoto model, which is based on a Non-Homogeneous Poisson Process. The model parameters are estimated by a two-step approach. The software reliability process can be monitored efficiently by using Statistical Process Control. Control charts are widely used for process monitoring. They assist the software development team in identifying failures and the actions to be taken during the software failure process, and hence assure better software reliability.


INTRODUCTION
Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment (Musa et al., 1987; Lyu, 1996). Among all SRGMs developed so far, a large family of stochastic reliability models based on a Non-Homogeneous Poisson Process, known as NHPP reliability models, has been widely used. Some of them depict exponential growth while others show S-shaped growth, depending on the nature of the growth phenomenon during testing. The success of the mathematical modeling approach to reliability evaluation depends heavily upon the quality of the failure data collected.
Software reliability assessment is important to evaluate and predict the reliability and performance of a software system, since reliability is its main attribute. To identify and eliminate human errors in the software development process, and also to improve software reliability, Statistical Process Control concepts and methods are the best choice. They are used to monitor the performance of a software process over time in order to verify that the process remains in a state of statistical control. SPC helps in finding assignable causes and achieving long-term improvements in the software process. Software quality and reliability can be achieved by eliminating the causes or by improving the software process or its operating procedures (Kimura et al., 1995).
The most popular technique for maintaining process control is control charting. The control chart is one of the seven tools for quality control. Software process control is used to secure the quality of the final product which will conform to predefined standards. In any process, regardless of how carefully it is maintained, a certain amount of natural variability will always exist. A process is said to be statistically "in-control" when it operates with only chance causes of variation. On the other hand, when assignable causes are present, then we say that the process is statistically "out-of-control." The control charts can be classified into several categories, as per several distinct criteria. Control charts should be capable of creating an alarm when a shift in the level of one or more parameters of the underlying distribution or a nonrandom behavior occurs. Normally, such a situation will be reflected in the control chart by points plotted outside the control limits or by the presence of specific patterns. The most common non-random patterns are cycles, trends, mixtures and stratification (Koutras et al., 2007). For a process to be in control the control chart should not have any trend or nonrandom pattern.
SPC is a powerful tool to optimize the amount of information needed for use in making management decisions. Statistical techniques provide an understanding of the business baselines, insights for process improvements, communication of the value and results of processes, and active and visible involvement. SPC provides real-time analysis to establish controllable process baselines; to learn, set, and dynamically improve process capabilities; and to focus on business areas which need improvement. An early detection of software failures will improve software reliability. The selection of proper SPC charts is essential to effective statistical process control implementation and use. The SPC chart selection is based on data, situation, and need (MacGregor and Kourti, 1995). Many factors influence the process, resulting in variability. The causes of process variability can be broadly classified into two categories, viz., assignable causes and chance causes.
The control limits can then be utilized to monitor the failure times of components. After each failure, the time can be plotted on the chart. If the plotted point falls between the calculated control limits, it indicates that the process is in a state of statistical control and no action is warranted. If the point falls above the UCL, it indicates that the process average, or the failure occurrence rate, may have decreased, which results in an increase in the time between failures. This is an important indication of possible process improvement. If this happens, the management should look for possible causes of this improvement, and if the causes are discovered then action should be taken to maintain them. If the plotted point falls below the LCL, it indicates that the process average, or the failure occurrence rate, may have increased, which results in a decrease in the failure time. This means that the process may have deteriorated, and thus actions should be taken to identify and remove the causes. It should be noted here that the parameters a and b should normally be estimated from the data of the failure process. We followed a two-step approach in estimating the parameters.
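As a minimal illustration of this monitoring logic (in Python, with hypothetical limit values that are not taken from the data used later in the paper):

def classify_point(value, lcl, ucl):
    """Interpret a single plotted time-between-failures point against the control limits.

    A point above the UCL suggests the failure occurrence rate may have decreased
    (possible improvement); a point below the LCL suggests the rate may have
    increased (possible deterioration); otherwise the process is in control."""
    if value > ucl:
        return "above UCL: possible improvement - identify and maintain the cause"
    if value < lcl:
        return "below LCL: possible deterioration - identify and remove the cause"
    return "within limits: process in statistical control"

# Example with hypothetical control limits for time between failures (hours)
print(classify_point(5.2, lcl=0.4, ucl=30.0))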
The control limits for the chart are defined in such a manner that the process is considered to be out of control when the time to observe exactly one failure is less than the LCL or greater than the UCL. Our aim is to monitor the failure process and detect any change in the intensity parameter. Even when the process is normal, there is a chance for a point to fall outside these limits; this is commonly known as a false alarm. The traditional false alarm probability is set to 0.27%, although any other false alarm probability can be used. The acceptable false alarm probability should in fact depend on the actual product or process (Gokhale and Trivedi, 1998).

NHPP SRGM
The Non-Homogeneous Poisson Process (NHPP) based software reliability growth models (SRGMs) have proved to be quite successful in practical software reliability engineering (Musa et al., 1987). The main issue in the NHPP model is to determine an appropriate mean value function to denote the expected number of failures experienced up to a certain time point. Model parameters can be estimated by a two-step approach (i.e., one parameter is estimated through Maximum Likelihood Estimation (MLE) and the other parameter is estimated through Least Squares Estimation (LSE)). Various NHPP SRGMs have been proposed based on various assumptions. Many of the SRGMs assume that each time a failure occurs, the fault that caused it can be immediately removed and no new faults are introduced, which is usually called perfect debugging. Imperfect debugging models relax this assumption (Ohba, 1984; Pham, 1993).
Let "t" be a continuous random variable with probability density function f(t), whose unknown constant parameters need to be estimated, and cumulative distribution function F(t). Let "a" denote the expected number of faults that would be detected given infinite testing time in the case of finite failure NHPP models, and let "b" represent the fault detection rate. In software reliability, the initial number of faults and the fault detection rate are always unknown. Then, the mean value function of the finite failure NHPP models can be written as m(t) = a F(t), representing the expected number of software failures by time "t". The failure intensity function λ(t) of the finite failure NHPP models is given by λ(t) = m'(t) = a f(t), which is proportional to the residual fault content (Pham, 2006).
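To make this notation concrete, the following minimal sketch (in Python, using the exponential distribution F(t) = 1 - e^(-bt) of the Goel-Okumoto model introduced below and purely hypothetical parameter values) evaluates the mean value function and the failure intensity:

import numpy as np

def mean_value(t, a, b):
    """m(t) = a * F(t), with F(t) = 1 - exp(-b t): expected failures by time t."""
    return a * (1.0 - np.exp(-b * t))

def intensity(t, a, b):
    """lambda(t) = a * f(t) = a * b * exp(-b t) = b * (a - m(t)),
    i.e. proportional to the residual fault content."""
    return a * b * np.exp(-b * t)

# Hypothetical parameter values, for illustration only
a, b = 100.0, 0.05
for t in (0.0, 10.0, 50.0):
    print(t, mean_value(t, a, b), intensity(t, a, b))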

Let N(t) be the cumulative number of software failures by time "t". A non-negative integer-valued stochastic process {N(t), t ≥ 0} is called a counting process if N(t) represents the total number of occurrences of an event in the time interval [0, t] and satisfies these two properties: (i) N(t) ≥ 0 and is integer-valued; (ii) if s < t, then N(s) ≤ N(t), and N(t) - N(s) represents the number of events occurring in the interval (s, t]. One of the most important counting processes is the Poisson process. A counting process {N(t), t ≥ 0} is said to be a Poisson process with intensity λ if: the initial condition is N(0) = 0; the failure process N(t) has independent increments; and the number of failures in any time interval of length s has a Poisson distribution with mean λs, that is, P{N(t + s) - N(t) = n} = e^(-λs)(λs)^n / n!, n = 0, 1, 2, .... When the intensity is allowed to vary with time, λ = λ(t), the stochastic counting process {N(t), t ≥ 0}, describing uncertainty about an infinite collection of random variables, one for each value of "t", is said to be an NHPP model.
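The counting-process definition can be illustrated by simulating failure epochs from an NHPP. The sketch below uses the standard thinning (acceptance-rejection) method with the Goel-Okumoto intensity and hypothetical parameter values; it is an illustration only, not part of the estimation procedure proposed in this paper.

import numpy as np

def simulate_nhpp(intensity, lam_max, t_end, seed=0):
    """Simulate event times of an NHPP on [0, t_end] by thinning a
    homogeneous Poisson process whose rate lam_max bounds intensity(t)."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate epoch from the homogeneous process
        if t > t_end:
            return np.array(times)
        if rng.random() < intensity(t) / lam_max:    # accept with probability lambda(t)/lam_max
            times.append(t)

# Goel-Okumoto intensity with hypothetical parameters; it is largest at t = 0, so lam_max = a*b
a, b = 100.0, 0.05
events = simulate_nhpp(lambda t: a * b * np.exp(-b * t), lam_max=a * b, t_end=100.0)
print(len(events), "simulated failures; expected m(100) =", a * (1.0 - np.exp(-b * 100.0)))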

Model description: G-O Model
One simple class of finite failure NHPP models is the Goel and Okumoto model (Goel and Okumoto, 1979), which shows exponential growth of the cumulative number of failures experienced. It is an NHPP-based SRGM which assumes that the failure intensity is proportional to the number of faults remaining in the software, describing an exponential failure curve. It has two parameters: "a" is the expected total number of faults in the code, and "b" is the shape factor, defined as the rate at which the failure rate decreases. The cumulative distribution function of the model is F(t) = 1 - e^(-bt), so that the mean value function is m(t) = a(1 - e^(-bt)). The main issue in the NHPP model is to determine an appropriate mean value function to denote the expected number of failures experienced up to a certain time point. The method of least squares (LSE) or maximum likelihood (MLE) has been suggested and widely used for estimation of the parameters of mathematical models (Kapur et al., 2008). Non-linear regression is a method of finding a nonlinear model of the relationship between the dependent variable and a set of independent variables. Unlike traditional linear regression, which is restricted to estimating linear models, nonlinear regression can estimate models with arbitrary relationships between independent and dependent variables. The model proposed in this paper is non-linear, and it is difficult to find a solution for nonlinear models using the simple least squares method. Therefore, the model has been transformed from non-linear to linear.
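As a cross-check on the transformation-based fit (this is not the two-step approach proposed in the paper), the mean value function m(t) = a(1 - e^(-bt)) can also be fitted directly by nonlinear least squares, for example with scipy's curve_fit; the data below are hypothetical cumulative failure counts:

import numpy as np
from scipy.optimize import curve_fit

def go_mean_value(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts observed at these test times
t_obs = np.array([10., 20., 30., 40., 50., 60., 70., 80.])
n_obs = np.array([12., 22., 30., 36., 41., 45., 48., 50.])

(a_hat, b_hat), _ = curve_fit(go_mean_value, t_obs, n_obs, p0=(n_obs[-1], 0.01))
print("a =", a_hat, "b =", b_hat)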
The least squares method is widely used to estimate the numerical values of the parameters that fit a function to a set of data. We will use the method in the context of a linear regression problem. It exists in several variations: its simpler version is called Ordinary Least Squares (OLS), and a more sophisticated version is called Weighted Least Squares (WLS) (Lewis-Beck, 2003).

TWO STEP APPROACH FOR PARAMETER ESTIMATION
MLE and LSE techniques are used to estimate the model parameters (Lyu, 1996; Musa et al., 1987). Sometimes the likelihood equations are difficult to solve explicitly; in such cases, the parameters are estimated with numerical methods (e.g., the Newton-Raphson method). On the other hand, LSE, like MLE, can be applied for small sample sizes and may provide better estimates (Huang and Kuo, 2002). The remaining parameters are estimated through the LSE regression approach.

ML (Maximum Likelihood) Parameter Estimation
The idea behind maximum likelihood parameter estimation is to determine the parameters that maximize the probability of the sample data. The method of maximum likelihood is considered to be more robust and yields estimators with good statistical properties. In other words, MLE methods are versatile and apply to many models and to different types of data. Although the methodology for MLE is simple, the implementation is mathematically intense. Using today's computer power, however, mathematical complexity is not a big obstacle. If we conduct an experiment and obtain N independent observations t_1, t_2, ..., t_N, the likelihood function (Pham, 2003) of a finite failure NHPP model can be written as L = [∏_{i=1}^{N} λ(t_i)] e^(-m(t_N)), which is maximized in finding "a".
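The corresponding log-likelihood, log L = Σ_i log λ(t_i) - m(t_N), can be evaluated numerically; the following sketch (with hypothetical failure times and parameter values) shows it for the Goel-Okumoto choice of λ(t) and m(t):

import numpy as np

def log_likelihood(times, intensity, mean_value):
    """log L = sum_i log(lambda(t_i)) - m(t_N) for a finite failure NHPP,
    given the ordered failure times and callables for lambda(t) and m(t)."""
    times = np.asarray(times, dtype=float)
    return np.sum(np.log(intensity(times))) - mean_value(times[-1])

# Hypothetical ordered failure times (hours) and hypothetical parameter values
t = np.array([9., 21., 32., 50., 66., 85., 108., 130., 160., 195.])
a, b = 30.0, 0.002
print(log_likelihood(t,
                     lambda x: a * b * np.exp(-b * x),
                     lambda x: a * (1.0 - np.exp(-b * x))))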

LS (Least Square) parameter estimation
LSE is a popular technique, widely used in many fields for function fitting and parameter estimation (Liu, 2011). The least squares method finds values of the parameters such that the sum of the squares of the differences between the fitting function and the experimental data is minimized. Least squares linear regression is a method for predicting the value of a dependent variable Y based on the value of an independent variable X.

The Least Squares Regression Line
Linear regression finds the straight line, called the least squares regression line, that best represents the observations in a bivariate data set. Given a random sample of observations, the population regression line is estimated by ŷ = bx + a.
Where "a" is a constant (the intercept), "b" is the regression coefficient (the slope), "x" is the value of the independent variable, and "ŷ" is the predicted value of the dependent variable. The least squares method defines the estimates of these parameters as the values which minimize the sum of the squares of the differences between the measurements and the model, which amounts to minimizing the expression E = Σ_i (y_i - (b x_i + a))^2 (Xie, 2001).
Taking the derivatives of E with respect to "a" and "b" and setting them to zero gives the following set of equations (called the normal equations): Σ_i y_i = b Σ_i x_i + N a and Σ_i x_i y_i = b Σ_i x_i^2 + a Σ_i x_i. The least squares estimates of "a" and "b" are obtained by solving these equations: b = Σ_i (x_i - X̄)(y_i - Ȳ) / Σ_i (x_i - X̄)^2 and a = Ȳ - b X̄, where X̄ and Ȳ are the sample means of x and y.
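A minimal sketch of solving the normal equations on hypothetical bivariate data (any small data set could be substituted):

import numpy as np

def least_squares_line(x, y):
    """Solve the normal equations for y-hat = b*x + a:
    b = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2),  a = ybar - b*xbar."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    b = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    a = ybar - b * xbar
    return a, b

# Hypothetical bivariate data
a, b = least_squares_line([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.1, 9.8])
print("intercept a =", a, "slope b =", b)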

ML Estimation
Procedure to find parameter 'a' using MLE.
The likelihood function of the G-O model is given as L(a, b) = [∏_{i=1}^{n} a b e^(-b t_i)] e^(-a(1 - e^(-b t_n))). Taking logarithms and setting the partial derivative with respect to "a" to zero gives the estimate a = n / (1 - e^(-b t_n)), into which the value of "b" obtained from the regression step is substituted.
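A minimal sketch of this step, with hypothetical cumulative failure times and a hypothetical value of "b" standing in for the regression estimate:

import numpy as np

def estimate_a(times, b):
    """MLE of 'a' for the G-O model: d(log L)/da = n/a - (1 - exp(-b t_n)) = 0
    gives a = n / (1 - exp(-b t_n)) for a given value of 'b'."""
    times = np.asarray(times, dtype=float)
    n, t_n = len(times), times[-1]
    return n / (1.0 - np.exp(-b * t_n))

# Hypothetical cumulative failure times and a hypothetical 'b' from the LS step
t = np.array([9., 21., 32., 50., 66., 85., 108., 130., 160., 195.])
print("a =", estimate_a(t, b=0.004))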

LS Estimation
Procedure to find parameter 'b' using regression approach.
The cumulative distribution function of the G-O model is F(t) = 1 - e^(-bt). Taking logarithms, -ln(1 - F(t)) = bt, gives a linear relation between the transformed distribution values and the failure times. The slope parameter D of this linear relation is estimated by least squares regression, and the resulting estimate is nothing but the parameter "b" estimated through the regression approach.
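A minimal sketch of this regression step on hypothetical cumulative failure times; approximating F(t_i) by the plotting position i/(n+1) and fitting the slope through the origin are added assumptions, not details taken from the text above:

import numpy as np

def estimate_b(times):
    """Estimate 'b' by regressing Y = -ln(1 - F(t_i)) on X = t_i through the
    origin, since F(t) = 1 - exp(-b t) implies -ln(1 - F(t)) = b t.
    F(t_i) is approximated by the plotting position i/(n+1) (an assumption)."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    f_hat = np.arange(1, n + 1) / (n + 1.0)
    y = -np.log(1.0 - f_hat)
    return np.sum(t * y) / np.sum(t * t)   # least squares slope through the origin

# Hypothetical cumulative failure times
print("b =", estimate_b([9., 21., 32., 50., 66., 85., 108., 130., 160., 195.]))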

DISTRIBUTION OF TIME BETWEEN FAILURES
Based on the inter-failure data given in Table 1, we monitor the software failure process through the failure control chart. We used cumulative time between failures data for software reliability monitoring using the G-O model. The use of the cumulative quantity is a different and new approach, which is of particular advantage in reliability. The estimates of the parameters "a" and "b" can be computed by an iterative method from the given cumulative time between failures data (Xie, 2002) shown in Table 1. Using the estimated "a" and "b" values we can compute m(t). Assuming an acceptable probability of false alarm of 0.27%, the control limits can be obtained from the 0.99865 and 0.00135 probability levels of the mean value function (Xie, 2002).
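A small sketch of this computation, assuming (as is common in this line of work, though not fully spelled out above) that the UCL, CL and LCL are placed at the 0.99865, 0.5 and 0.00135 fractions of the expected total fault content "a", with hypothetical parameter estimates:

import numpy as np

def control_limits(a, b, alpha=0.0027):
    """Control limits for the m(t)-based chart: UCL, CL and LCL are placed at
    the (1 - alpha/2), 0.5 and alpha/2 fractions of 'a' (an assumption); the
    corresponding times follow by inverting m(t) = a * (1 - exp(-b t))."""
    fractions = {"UCL": 1.0 - alpha / 2.0, "CL": 0.5, "LCL": alpha / 2.0}
    limits = {}
    for name, p in fractions.items():
        m_limit = a * p
        t_limit = -np.log(1.0 - p) / b     # time at which m(t) reaches m_limit
        limits[name] = (m_limit, t_limit)
    return limits

# Hypothetical parameter estimates from the two-step approach
for name, (m_lim, t_lim) in control_limits(a=30.0, b=0.004).items():
    print(name, "m(t) =", round(m_lim, 4), "at t =", round(t_lim, 1))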

CONCLUSION
The given 30 inter-failure times are plotted through the estimated mean value function against the failure serial order. The parameter estimation is carried out by the two-step approach for the considered model. The graphs have shown out-of-control signals, i.e., points below the LCL. Hence we conclude that our method of estimation and the control chart give a positive recommendation for their use in identifying a preferable control process or a desirable out-of-control signal. By observing the failure control chart we identified that the failure situation is detected at the 9th point of Table 4 for the corresponding m(t), which is below m_L(t), and the process then continued to fail. This indicates that the failure process is detected at an earlier stage compared with the Xie et al. (2002) control chart, which detects the failure at the 23rd point for the inter-failure data above the UCL. Hence our proposed failure control chart detects the out-of-control situation earlier than the time control chart does. The early detection of software failures will improve software reliability. When the time between failures is less than the LCL, it is likely that there are assignable causes leading to significant process deterioration, and this should be investigated. On the other hand, when the time between failures exceeds the UCL, there are probably reasons that have led to significant improvement.
From Figure 2, the process is stabilized by touching the X-axis, whereas in Figure 1 there is also a possibility of an upward trend in the average number of failures. As the aim of SPC is to stabilize the process at some point of time, the two-step approach of Figure 2 is preferable.