IMAGE COMPRESSION USING MULTIRATE SAMPLING

This paper deals with a novel method of image compression. The proposed algorithm targets the psychovisual redundancy in the given image to improve compression. The properties or features of the image are initially modified (while ensuring that the image quality is not deteriorated) by downsampling along the horizontal and vertical dimensions. The transform is then applied on this modified image, yielding a better PSNR after compression than on the original image. The results obtained are compared with the DCT and adaptive DCT. Moreover, the proposed algorithm is image-independent, as it gives consistently better compression performance across different images.


INTRODUCTION
Memory and bandwidth are the prime constraints in image storage and transmission applications. One of the major challenges in enabling mobile multimedia data services is the need to process and wirelessly transmit a very large volume of data. While significant improvements in achievable bandwidth are expected with future wireless access technologies, improvements in battery technology will lag the rapidly growing energy requirements of future wireless data services.
Since wavelets were first mentioned by Haar in his Ph.D. thesis in 1909, the Discrete Wavelet Transform (DWT) has been used in most image compression applications, as it overcomes the disadvantages of the DCT. Building on wavelet image compression, the energy-efficient multi-wavelet image transform is a technique developed to eliminate the computation of certain high-pass coefficients of an image.
Image compression is one of the most important and successful applications of the wavelet transform. Unlike in DCT-based image compression, the performance of a wavelet-based image coder depends to a large degree on the choice of the wavelet. This problem is usually handled by using standard wavelets that are not specially adapted to a given image but are known to perform well on photographic images.

DISCRETE COSINE TRANSFORM (DCT)
Since the late nineties, the Discrete Cosine Transform (DCT) [1] has been used as the default image transform in the majority of visual systems as well as in video coding standards such as MPEG and JVT. It is one of the most successful transforms for decomposing data into multiple spatial frequency bands.

Definition
The forward DCT of a 1-D sequence x(n) of length N is most popularly defined as [2]

C(u) = α(u) Σ_{n=0}^{N−1} x(n) cos[ (2n+1)uπ / 2N ], for u = 0,1,2,…,N−1 … (1)

The inverse DCT is similarly defined as

x(n) = Σ_{u=0}^{N−1} α(u) C(u) cos[ (2n+1)uπ / 2N ], for n = 0,1,2,…,N−1 … (2)

In both equations, α(u) is defined as

α(u) = √(1/N) for u = 0, and α(u) = √(2/N) for u ≠ 0 … (3)

It can be observed from (1) that for u = 0 we get the first forward transform coefficient, C(0) = √(1/N) Σ_{n=0}^{N−1} x(n), which is proportional to the average value of the sample sequence. This value is therefore referred to as the DC coefficient. The rest of the transform coefficients are collectively named the AC coefficients.
Since the present area of interest is image processing, it is necessary to extend the above definition to 2-D. The 2-D forward DCT can be defined as a direct extension of the 1-D transform:

C(u,v) = α(u) α(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) cos[ (2x+1)uπ / 2N ] cos[ (2y+1)vπ / 2N ], for u,v = 0,1,2,…,N−1 … (4)

Here, α(u) and α(v) take the same definition as in (3). Similarly, the 2-D inverse DCT can be given as

f(x,y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} α(u) α(v) C(u,v) cos[ (2x+1)uπ / 2N ] cos[ (2y+1)vπ / 2N ], for x,y = 0,1,2,…,N−1 … (5)

The basis functions of the DCT are orthogonal and exhibit a progressive increase in frequency both in the vertical and horizontal directions.
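As a sanity check on the definitions above, the following sketch implements (1), (2) and (4) directly in NumPy. The function names are illustrative, not from the paper; the inverse transform should reconstruct the input exactly.

```python
import numpy as np

def alpha(u, N):
    # Normalization factor from Eq. (3)
    return np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)

def dct_1d(x):
    # Forward 1-D DCT of Eq. (1)
    N = len(x)
    return np.array([
        alpha(u, N) * sum(x[n] * np.cos((2 * n + 1) * u * np.pi / (2 * N))
                          for n in range(N))
        for u in range(N)
    ])

def idct_1d(C):
    # Inverse 1-D DCT of Eq. (2)
    N = len(C)
    return np.array([
        sum(alpha(u, N) * C[u] * np.cos((2 * n + 1) * u * np.pi / (2 * N))
            for u in range(N))
        for n in range(N)
    ])

def dct_2d(f):
    # Eq. (4) is separable: apply the 1-D DCT along rows, then along columns
    return np.apply_along_axis(dct_1d, 0, np.apply_along_axis(dct_1d, 1, f))

x = np.array([1.0, 2.0, 3.0, 4.0])
C = dct_1d(x)
# The DC coefficient equals sqrt(N) times the sequence mean
print(np.isclose(C[0], np.sqrt(len(x)) * x.mean()))   # True
print(np.allclose(idct_1d(C), x))                     # True
```

For a constant 4x4 image, Eq. (4) concentrates everything in C(0,0) and all AC coefficients vanish, which is a convenient quick check of a 2-D implementation.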

Features of DCT
Some features of the DCT [1] with significance in image processing applications are presented below.

Decorrelation: The transform removes the correlation between neighbouring pixels, so the resulting coefficients can be encoded independently.
Energy Compaction: Energy compaction is the ability of the transform to pack the information present in the samples into as few coefficients as possible.
Orthogonality: The orthogonality property implies that the inverse of the transformation matrix A is equal to its transpose, and thus the precomputation complexity is reduced.
July 25, 2013
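The compaction and orthogonality properties can be illustrated with a small NumPy sketch (an illustrative construction, not code from the paper): for a smooth, highly correlated signal almost all of the energy lands in the first few coefficients, and the transform matrix inverts by its transpose.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II basis matrix A, so that C = A @ x
    A = np.zeros((N, N))
    for u in range(N):
        a = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        for n in range(N):
            A[u, n] = a * np.cos((2 * n + 1) * u * np.pi / (2 * N))
    return A

N = 8
A = dct_matrix(N)

# Orthogonality: the inverse transform is simply the transpose
print(np.allclose(A @ A.T, np.eye(N)))   # True

# Energy compaction: a smooth ramp signal (strong pixel-to-pixel correlation)
x = np.arange(N, dtype=float)
C = A @ x
fraction = (C[:2] ** 2).sum() / (C ** 2).sum()
# the first two coefficients carry well over 99% of the signal energy
print(fraction > 0.99)                   # True
```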

Disadvantages of DCT
The success of the DCT [3] can be largely attributed to its energy compaction capabilities. But, in spite of its various advantages, DCT performance suffers from certain limitations in its construction:
The correlation between pixels of neighbouring blocks is neglected; only the spatial correlation of the pixels inside a single 2-D block is considered.
It is practically impossible to completely decorrelate the blocks at their boundaries using the DCT.
At very low bit rates, the reconstructed images are distorted by undesirable blocking artifacts.
Additional time and effort must be spent to correct the scaling factor.
Being fixed in nature, the DCT function cannot adapt to the source data.
The DCT has a history of being inadequate for binary images (fax or pictures of fingerprints) characterized by long periods of constant amplitude (low spatial frequencies) followed by brief periods of sharp transitions.

DISCRETE WAVELET TRANSFORM (DWT)
Wavelet-based image compression is the order of the day as it enjoys several benefits. Primarily, it utilizes an unconditional basis function whose expansion coefficients decrease to negligible values as the index increases. The wavelet expansion allows for a more precise, localized isolation and description of the signal characteristics, which makes the DWT very effective in image compression applications. Secondly, the inherent flexibility in choosing a wavelet gives scope to design wavelets customized to fit individual requirements.
The basis functions employed by the Wavelet Transform are called wavelets. A wavelet may be simply defined [4] as a function with a well-defined temporal support that "wiggles" about the x-axis (it has exactly the same area above and below the axis). We are aware that a wave is an oscillating function of time or space and is periodic. In contrast, wavelets are localized waves whose amplitude drops smoothly to zero. Their energy is therefore concentrated in time or space, making them suited to the analysis of transient signals. While the Fourier Transform and the STFT use waves to analyze signals, the Wavelet Transform uses wavelets of finite energy. These waveforms are illustrated in Fig. 1.

Figure 1. Demonstration of (a) wave and (b) wavelet
The wavelets are obtained from a single mother wavelet [5] by dilations and translations. The term mother implies that the functions with different regions of support used in the transformation process are derived from one main function, the mother wavelet. In other words, the mother wavelet is a prototype for generating the other window functions and is given by

ψ_{s,τ}(t) = (1/√s) ψ( (t − τ) / s ) … (6)

where s is the scale factor, τ is the translation factor, and the factor 1/√s accounts for energy normalization across the various scales. When the scale factor s > 1, the signal is dilated; s < 1 implies that the signal is compressed. Translating into the physical context, a high scale indicates that the signal is spanned in its entirety, giving a non-detailed global view; this corresponds to low frequency resolution. On the other hand, a low scale utilizes short intervals of time and outputs a detailed representation of the signal characteristics; this may be termed high frequency resolution. Hence, the width of the windowing function determines how the signal is represented: whether there is good frequency resolution (frequency components close together can be separated) or good time resolution (the time at which a frequency change occurs can be located).
Conditions that play a key role in wavelets [4][5][6][7][8]:
Orthogonality condition: the analysis bank is inverted by its transpose.
PR condition: the synthesis bank inverts the analysis bank with d delays.
V condition: p vanishing moments for smoothness.
E condition: the eigenvalue condition, which determines convergence to the scaling function.
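Equation (6) can be illustrated with a short sketch. The Mexican-hat function used below is only an example mother wavelet (not one used by the paper), chosen because it has a simple closed form; the point is that the 1/√s factor keeps the energy of every dilated, translated copy equal to that of the mother wavelet.

```python
import numpy as np

def mother_wavelet(t):
    # Mexican-hat ("Ricker") wavelet, used here as an example prototype
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def daughter_wavelet(t, s, tau):
    # Eq. (6): psi_{s,tau}(t) = (1/sqrt(s)) * psi((t - tau) / s)
    return (1.0 / np.sqrt(s)) * mother_wavelet((t - tau) / s)

t = np.linspace(-30.0, 30.0, 6001)
dt = t[1] - t[0]

# Energy of the mother wavelet (s = 1, tau = 0) ...
e1 = np.sum(daughter_wavelet(t, 1.0, 0.0) ** 2) * dt
# ... equals the energy of a dilated (s = 4) and translated (tau = 5) copy
e2 = np.sum(daughter_wavelet(t, 4.0, 5.0) ** 2) * dt
print(abs(e1 - e2) < 1e-2)   # True
```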

Advantages of DWT over DCT
The DWT is superior [9, 10] to the DCT in its role in image compression. The features of the DWT which stand in its favor are:
Blocking artifacts are avoided, since there is no need to divide the input image into non-overlapping 2-D blocks.
It allows good localization both in time and spatial frequency domain.
It introduces inherent scaling and performs the transformation of the whole image.
Since it can more accurately identify the data relevant to human perception, it achieves a higher compression ratio.
Higher flexibility: the inherent flexibility in choosing a wavelet gives scope to design wavelets customized to fit individual requirements.
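As a concrete illustration of a whole-image transform, the following minimal sketch performs one level of a 2-D Haar DWT (the Haar wavelet is used here only for simplicity; it is not one of the wavelets used later in the paper). The entire image is transformed at once, so no block partitioning is involved.

```python
import numpy as np

def haar_dwt2_level(img):
    # One level of a 2-D Haar DWT applied to the whole image at once.
    # Rows first: average (lowpass) and difference (highpass) of row pairs.
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2.0)
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2.0)
    # Then columns, producing the four subbands LL, LH, HL, HH.
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2.0)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2.0)
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2_level(img)
# The transform is orthogonal: energy is preserved across the four subbands
total = sum(np.sum(b ** 2) for b in (LL, LH, HL, HH))
print(np.isclose(total, np.sum(img ** 2)))   # True
```

The LL subband is a half-resolution approximation of the image; repeating the step on LL yields the multi-level decomposition that coders such as SPIHT operate on.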

PROPOSED ALGORITHM
The proposed algorithm employs the concept of downsampling the image before compression with a view to changing its features adaptively, i.e. the rate of sampling is specific to the given image. It is ensured that the visual quality of the image is maintained, as only psychovisual redundancy is targeted by the algorithm.
The algorithm can be detailed in the following steps:
The original image is divided into blocks of fixed size.
The adaptive image is obtained by downsampling each block and modifying its features suitably:
- Downsampling is performed adaptively in three directions: only horizontal, only vertical, and simultaneously in both horizontal and vertical.
- The three outputs are separately interpolated using three different methods: bilinear, bicubic and nearest neighbour.
- The measurement metric MSE is applied to each of the resized blocks at the output of the interpolation stage.
- The resized block giving the best MSE is selected, indexed and stored for compression.
- The procedure is repeated on all blocks of the image, and the chosen resized blocks are concatenated to give the adaptive image.
The input bit rate is fixed.
The wavelet and its decomposition level are chosen; two wavelets, bior4.4 and db4, have been used in this implementation.
Both the original image as well as the adaptive image are compressed and encoded using the DWT and the SPIHT coder to get the compressed images.
After decompression, PSNR is calculated for both the cases.
The results are displayed: Original image, Adaptive Image, Compressed-Decompressed Images, Compression Ratio, PSNR value.
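The per-block adaptation step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: only nearest-neighbour interpolation is shown (the paper also uses bilinear and bicubic), the downsampling factor is fixed at 2, and all function names are hypothetical.

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two equally sized blocks
    return np.mean((a - b) ** 2)

def psnr(orig, recon, peak=255.0):
    # Peak signal-to-noise ratio in dB (undefined when MSE is zero)
    return 10.0 * np.log10(peak ** 2 / mse(orig, recon))

def nn_resize(block, shape):
    # Nearest-neighbour interpolation back to the original block size
    r = np.arange(shape[0]) * block.shape[0] // shape[0]
    c = np.arange(shape[1]) * block.shape[1] // shape[1]
    return block[np.ix_(r, c)]

def adapt_block(block):
    # Downsample by 2 in each of the three directions, resize each candidate
    # back, and keep the direction whose reconstruction has the lowest MSE.
    candidates = {
        'h':  block[:, 0::2],     # horizontal only
        'v':  block[0::2, :],     # vertical only
        'hv': block[0::2, 0::2],  # both directions
    }
    best = min(candidates,
               key=lambda k: mse(nn_resize(candidates[k], block.shape), block))
    return best, nn_resize(candidates[best], block.shape)

# A block whose rows are all identical survives vertical downsampling exactly
block = np.tile(np.arange(8.0), (8, 1))
best, resized = adapt_block(block)
print(best)                            # v
print(np.allclose(resized, block))     # True
```

Each chosen direction would be indexed per block, and the concatenation of the selected blocks forms the adaptive image that is then passed to the DWT/SPIHT stage.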

RESULTS & DISCUSSION
The proposed algorithm is tested with the standard image 'Lena' and a simple image 'Twin Towers'. The results are presented both for statistical analysis of the performance of the different algorithms and for the quality of the image.
Adaptive DWT performs better in terms of PSNR and CR even at higher quantization levels.
As the quantization level increases, the PSNR decreases rapidly for both JPEG and plain DWT, whereas the PSNR given by adaptive DCT decreases only gradually, and adaptive DWT retains a PSNR higher still.
Adaptive DWT gives better PSNR for the same CR compared to adaptive DCT.
The CR is much higher for adaptive DWT compared to JPEG.
The algorithm has been tested on various images (Lena, Peppers, Wheel and Own image). It is observed that the results are consistently good on all images even though each image has distinct features. Hence, the algorithm is image-independent.