A PREDICTIVE CODING METHOD FOR LOSSLESS COMPRESSION OF IMAGES

Images are an important part of today's digital world. However, because of the large quantity of data needed to represent modern imagery, storing such data can be expensive. Work on efficient image storage (image compression) therefore has the potential to reduce storage costs and enable new applications. Lossless image compression is used in medical, scientific, and professional video-processing applications. Compression is a process in which data of a given size is reduced to a smaller size. Storing and transmitting images in their original form can be problematic in terms of storage space and transmission speed; compression makes both storage and transmission more efficient.


Introduction
The recent past has seen many contributions to the area of lossless image compression from several researchers. In the literature, broadly two types of methods are present for lossless compression of images:

1. Transform based methods
2. Prediction based methods

Current prediction based methods are generic in nature and consist of three main steps:

1. Decorrelation of the image by subtracting predicted pixel values, i.e., prediction.
2. Estimation of the current pixel's context, found by measuring some properties of its causal neighbourhood pixels, i.e., context modelling.
3. Context based entropy coding of the prediction errors, i.e., entropy coding.

These steps form the basic structure of the state-of-the-art lossless image coding methods.
Prediction is used to decorrelate the image pixels from their neighbourhood. The literature shows that prediction is worthwhile in state-of-the-art lossless image compression methods such as JPEG-LS and CALIC, and it offers a number of ways of predicting the value of the current pixel from its spatial correlation with neighbouring pixels. It is well known that if the data is decorrelated with an appropriate predictor, the resulting error residue will be independent and identically distributed. This error residue can be encoded optimally into a binary sequence using any of the standard variable length coding techniques, such as Huffman coding or arithmetic coding. If we can decorrelate the image pixels better than the prediction methods used in the existing state-of-the-art methods, the error distribution will be more Laplacian, i.e., sharply peaked at zero. This leads to better compression performance than existing state-of-the-art lossless compression methods such as JPEG-LS.
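To illustrate how prediction decorrelates pixel data and lowers the entropy seen by the entropy coder, the following sketch applies the MED predictor of JPEG-LS to a synthetic ramp image and compares first-order entropies. The image and function names are illustrative, not taken from this work:

```python
import numpy as np

def med_predict(img):
    """MED (Median Edge Detector) predictor used in JPEG-LS.
    Returns the residual image (actual - predicted)."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            # Causal neighbours; out-of-image neighbours treated as 0.
            W  = img[r, c - 1] if c > 0 else 0
            N  = img[r - 1, c] if r > 0 else 0
            NW = img[r - 1, c - 1] if r > 0 and c > 0 else 0
            if NW >= max(W, N):
                pred[r, c] = min(W, N)
            elif NW <= min(W, N):
                pred[r, c] = max(W, N)
            else:
                pred[r, c] = W + N - NW
    return img - pred

def entropy(data):
    """First-order entropy in bits per symbol."""
    _, counts = np.unique(data, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# A smooth synthetic ramp image: the residuals peak sharply at a
# single value, so their entropy is far below that of the raw pixels.
img = np.add.outer(np.arange(64), np.arange(64)) % 256
res = med_predict(img)
print(entropy(img), entropy(res))
```

On smooth content the residual source is much more compressible than the raw pixel source, which is exactly why the prediction step precedes entropy coding.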

Gradient Adjusted Predictor (GAP)
The GAP predictor, which is employed in CALIC, adapts itself to the gradients of horizontal and vertical edges. GAP tries to detect how rapidly the edge is changing around a pixel x, classifies the tendency of the edge into sharp, normal, and weak, and accordingly gives different weights to the neighbourhood pixels in a linear prediction of x. For a pixel belonging to a slope bin, the predictor associated with that bin is used to predict x. CALIC first defines d_v as the criterion for a vertical edge and d_h as the criterion for a horizontal edge. Using the causal neighbours W, N, NE, NW, WW, NN, and NNE of x, they are computed as:

d_v = |W - NW| + |N - NN| + |NE - NNE|
d_h = |W - WW| + |N - NW| + |N - NE|

The thresholds applied to d_v - d_h are based on experiments and are rounded to numbers closely related to powers of 2, so that a fast hardware implementation is easy to construct.
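The GAP decision rule can be sketched in Python. The neighbour names (W, N, NE, NW, WW, NN, NNE) and the thresholds 80, 32, and 8 follow the commonly cited CALIC formulation; this is an illustrative sketch, not code from this work:

```python
def gap_predict(W, N, NE, NW, WW, NN, NNE):
    """Gradient Adjusted Predictor (GAP) from CALIC.
    Arguments are the causal neighbours of the current pixel x;
    returns the predicted value of x."""
    d_h = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient
    d_v = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient
    if d_v - d_h > 80:        # sharp horizontal edge
        return W
    if d_v - d_h < -80:       # sharp vertical edge
        return N
    pred = (W + N) / 2 + (NE - NW) / 4
    if d_v - d_h > 32:        # horizontal edge
        pred = (pred + W) / 2
    elif d_v - d_h > 8:       # weak horizontal edge
        pred = (3 * pred + W) / 4
    elif d_v - d_h < -32:     # vertical edge
        pred = (pred + N) / 2
    elif d_v - d_h < -8:      # weak vertical edge
        pred = (3 * pred + N) / 4
    return pred

# Flat neighbourhood: both gradients are zero, prediction is the mean.
print(gap_predict(100, 100, 100, 100, 100, 100, 100))
```

Note how each branch corresponds to one of the slope bins mentioned above: the sign and magnitude of d_v - d_h select how strongly W or N dominates the prediction.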

Modified Gradient Adjusted Predictor (MGAP)
The MGAP algorithm is expected to give better prediction than the conventional GAP. This intuition is based on the fact that a set of predictors optimized for the given image is expected to perform better than the default predictors of GAP. In other words, the main objective of MGAP is to find an image-dependent representative predictor for each of the slope bins.
We propose an algorithm, named MGAP, that modifies GAP as follows:

1. Make a pool of all seven predictors of GAP.
2. For a given image, select all the pixels belonging to a particular slope bin.
3. Apply each of the seven predictors, one by one, to the pixels identified in step 2 and find the corresponding prediction errors. In this way we obtain seven sources of prediction errors, one per predictor.
4. Compute the entropy of each of the sources obtained in step 3 as:

   H = - Σ_{i=1}^{n} p_i log2(p_i)

   where p_i is the probability of the i-th symbol and n is the number of symbols in the source.
5. Select the predictor that results in the minimum entropy as the predictor for that particular bin.
6. Repeat steps 1 to 5 for all seven slope bins.

The above process yields the optimal set of predictors, drawn from the set of GAP predictors, for the given image.
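The per-bin, minimum-entropy selection in steps 1 to 6 can be sketched as follows. The two-predictor pool and the data layout below are illustrative stand-ins, not the actual seven GAP predictors:

```python
import numpy as np

def entropy(errors):
    """First-order entropy (bits/symbol) of a prediction-error source."""
    _, counts = np.unique(errors, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def select_predictors(pixels_by_bin, predictor_pool):
    """For each slope bin, pick the pool predictor whose prediction
    errors have minimum entropy on the pixels of that bin.
    pixels_by_bin:  {bin_id: list of (actual, neighbourhood) pairs}
    predictor_pool: list of functions neighbourhood -> prediction
    Returns {bin_id: index of the chosen predictor}."""
    chosen = {}
    for b, samples in pixels_by_bin.items():
        best_idx, best_h = None, float("inf")
        for i, predict in enumerate(predictor_pool):
            errs = np.array([actual - predict(nb) for actual, nb in samples])
            h = entropy(errs)
            if h < best_h:
                best_idx, best_h = i, h
        chosen[b] = best_idx
    return chosen

# Toy illustration with two hypothetical predictors: copy the west
# neighbour or copy the north neighbour.
pool = [lambda nb: nb["W"], lambda nb: nb["N"]]
bins = {0: [(10, {"W": 10, "N": 3}), (20, {"W": 20, "N": 7})],
        1: [(5, {"W": 9, "N": 5}), (6, {"W": 2, "N": 6})]}
print(select_predictors(bins, pool))
```

In the toy data, bin 0 is predicted exactly by the west neighbour and bin 1 by the north neighbour, so the entropy criterion assigns a different predictor to each bin, which is precisely the image-dependent behaviour MGAP aims for.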

Experimental Results
In this work, the performance of the proposed method is studied on different classes of images: PCB, texture, aerial, fingerprint, cartoon, and medical images. Its performance is compared against the MED and GAP methods. From the results shown in the tables, we can say that the proposed method performs consistently better than GAP and MED.
To test our algorithm, we used different classes of images: PCB, aerial, texture, fingerprint, and cartoon images. All the images are greyscale and of different sizes, and were selected randomly from the database.

Conclusions
This chapter describes the predictive lossless image compression process. To reduce the statistical redundancy between image pixels, we have proposed a new algorithm. For each slope bin, the algorithm selects the candidate predictor that gives the minimum entropy compared with the other predictors for that bin. The proposed algorithm gives better compression performance than state-of-the-art methods such as GAP and MED. We have observed that a larger pool of predictors from which to select the candidate predictor would give even better results.