Near-lossless compression algorithm for Bayer pattern color filter arrays


Andriy Bazhyna, Atanas Gotchev, Karen Egiazarian
Institute of Signal Processing, Tampere University of Technology
P. O. Box 553, FIN-33101, Tampere, Finland
[email protected]

ABSTRACT

In this contribution, we propose a near-lossless compression algorithm for Color Filter Array (CFA) images. It achieves a higher compression ratio than any strictly lossless algorithm at the price of a small and controllable error. In our approach, a structural transformation is applied first in order to pack the pixels of the same color into a structure appropriate for the subsequent compression algorithm. The transformed data is compressed by a modified version of the JPEG-LS algorithm. A nonlinear and adaptive error quantization function is embedded in the JPEG-LS algorithm after the fixed and context-adaptive predictors. It is step-like and adapts to the base signal level in such a manner that higher error values are allowed in lighter parts with no loss of visual quality. These higher error values are then suppressed by the gamma correction applied during the image reconstruction stage. The algorithm can be adjusted for arbitrary pixel resolution, gamma value and allowable error range. The compression performance of the proposed algorithm has been tested on real CFA raw data. The results are presented in terms of compression ratio versus reconstruction error, and the visual quality of the reconstructed images is demonstrated as well.

Keywords: Bayer pattern, color filter array, near-lossless compression, JPEG-LS

1. INTRODUCTION

Digital cameras capture image data using CCD or CMOS sensors. Color information is registered by separate sensors for the three basic colors: red, green and blue, or their complementary colors cyan, magenta and yellow. However, due to hardware constraints, most commercially available digital cameras capture color information with a single light-sensitive sensor covered by a CFA. In a CFA, only one basic color is captured per pixel position. The remaining two basic colors have to be reconstructed later by digital image processing algorithms for CFA interpolation. Among the various CFA masks, the most popular one is the so-called Bayer pattern1 (Figure 1).

Figure 1. Bayer CFA pattern (rows alternate G R G R ... and B G B G ...).

The reconstructed image is obtained by passing the captured data through an image formation pipeline, which includes the following operations: color interpolation (demosaicing), color correction, denoising and enhancement, and gamma correction. Finally, the reconstructed image is compressed, lossily or losslessly, for storage and transmission. Some of these modules are quite resource-consuming, and embedding them in a small, low-power camera is rather problematic. Thus, the quality of the resulting images may be compromised by the hardware constraints. Therefore, in many imaging applications, such as medical, astronomical and professional photography, saving the unreconstructed raw CFA data ('digital negatives') is preferable, in order to be able to reconstruct full-color images with comprehensive algorithms running on powerful computers and to control the whole image reconstruction process. By reducing the camera operations to just raw data compression, such an approach would also speed up the processing, avoid the data redundancy generated by color interpolation, and prolong the battery life. The raw data is made accessible to different end devices, which are free to use reconstruction methods adequate to their computational, power and display capabilities. As a result, better flexibility and scalability of the overall system is expected.

The methodology described above raises the problem of an efficient CFA compression method. While for conventional photography and mobile device cameras lossy compression may be preferable, medical, astronomical and professional photography may require lossless or near-lossless compression. Near-lossless compression refers to a lossy compression algorithm for which each reconstructed image sample differs from the corresponding original image sample by no more than a pre-specified small value; typically this error is at most about 3 units out of 256 intensity levels. A near-lossless compression algorithm may be a good alternative for high-quality imaging applications. It provides a significantly higher compression ratio than any strictly lossless algorithm at the price of a small and controllable error. In most cases, no visual difference from the original image is encountered.

The problem of lossy compression of Bayer CFA data has been addressed2,3,4; however, the problem of near-lossless compression has not been well scrutinized yet. There are issues related to the specifics of the Bayer CFA pattern, and implications caused by the gamma correction, that need to be addressed. In this contribution, we propose a near-lossless compression algorithm for CFA images. In our approach, a structural transformation is applied first in order to pack the pixels of the same color into a structure appropriate for the subsequent compression algorithm. The transformed data is compressed with a modified version of the JPEG-LS algorithm. Our modification of JPEG-LS takes into account the gamma correction operation applied to the raw data. A nonlinear and adaptive error quantization function is embedded in the JPEG-LS algorithm after the fixed and context-adaptive predictors. It is step-like and adapts to the base signal level in such a manner that higher error values are allowed in lighter parts with no loss of visual quality. These higher error values are then suppressed by the gamma correction applied during the image reconstruction stage.

The rest of the paper is organized as follows. In the next section, the structural transformation of CFA data is presented. The effect of gamma correction on CFA data is demonstrated in Section 3. In Section 4, the nonlinear and adaptive error quantization function is presented. The integration of the proposed error quantization function into the JPEG-LS algorithm is described in Section 5. The compression performance of the proposed algorithm, tested on real CFA raw data, is presented in Section 6. Finally, conclusions are given.

2. STRUCTURE CONVERSION OF CFA DATA

Any compression algorithm applied directly to the Bayer CFA would be rather inefficient. The reason is that a CFA image is a combination of pixels from three different color planes. Although these color planes are highly correlated, pixels from different planes will have rather different intensity levels due to the nonuniform spectrum of the light source and the nonuniform spectral sensitivity of the camera sensor. If pixels from such color planes are mixed together into a single intensity image, they create high frequencies that are not susceptible to high compression. A structural transformation is needed in order to pack the pixels of the same color into a structure appropriate for the subsequent compression algorithm. For the red and blue pixels this transformation is straightforward: as shown in Figure 2, they can be directly downsampled into compact rectangular form. For the green pixels, located on a quincunx grid, there are several possibilities. We have studied different methods of green pixel separation and compared their peculiarities in the light of the chosen prediction-based compression algorithm, i.e. JPEG-LS. We found that the best structure results from a separation into two rectangular arrays: one for the pixels in odd rows and odd columns, and the other for the pixels in even rows and even columns (see Figure 2 and the sketch after it).


Figure 2. Separation of Bayer CFA data into four decimated planes and compact representation of those planes.
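As an illustration of this separation, here is a minimal C sketch. It assumes the GRBG layout of Figure 1 (with 0-based indexing: G on even rows/even columns and odd rows/odd columns, R to the right of the first green, B below it), 16-bit samples and even image dimensions; the function and plane names are our own choices, not from the original implementation.

    #include <stdint.h>

    /* Split a GRBG Bayer CFA (as in Figure 1) into four (w/2) x (h/2) planes:
       R, B and the two green sub-lattices G1 (even row, even column) and
       G2 (odd row, odd column). */
    void split_bayer(const uint16_t *cfa, int w, int h,
                     uint16_t *r, uint16_t *b,
                     uint16_t *g1, uint16_t *g2)
    {
        int w2 = w / 2;
        for (int y = 0; y < h / 2; y++) {
            for (int x = 0; x < w2; x++) {
                g1[y * w2 + x] = cfa[(2 * y)     * w + 2 * x];     /* G */
                r [y * w2 + x] = cfa[(2 * y)     * w + 2 * x + 1]; /* R */
                b [y * w2 + x] = cfa[(2 * y + 1) * w + 2 * x];     /* B */
                g2[y * w2 + x] = cfa[(2 * y + 1) * w + 2 * x + 1]; /* G */
            }
        }
    }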

3. GAMMA CORRECTION OF IMAGE DATA

For most display devices (e.g. the cathode-ray tube, CRT) the dependence between the emitted light intensity and the input voltage is nonlinear and can be expressed by the following power function:

$$\text{luminance} = \text{voltage}^{\gamma},$$

where γ varies between 2.35 and 2.55 for a properly adjusted conventional CRT monitor6. Gamma correction is the process of compensating for this nonlinearity in order to achieve correct intensity reproduction, i.e.

$$\text{output} = \text{input}^{1/\gamma}.$$

In most still image and video capturing systems, the gamma correction is done only once, i.e. in the capturing device, in order to avoid repetitive processing when displaying the same image many times. However, this solution causes a problem when transferring images from one platform to another. Suns and PCs store image data in the JPEG7 and BMP8 image formats with a pre-correction of 1/2.5. For images stored in the linear PNG format9, the correction is done by the displaying software. Macintosh computers have a built-in hardware correction of 1/1.4, while for Silicon Graphics (SG) computers it is 1/1.7 (ref. 10). Thus, for Macintosh and SG the image data should be pre-corrected only by factors of 1/1.8 and 1/1.5, respectively. Moreover, other display devices, such as liquid crystal displays (LCD), have a nonlinearity different from that of a CRT. The conversion from one system format to another is feasible for many ordinary devices; however, for those demanding higher quality it might be preferable to avoid it, that is, to store non-corrected image data and apply the gamma correction as a post-processing step.
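As a quick sanity check of these factors (our own arithmetic, not part of the original text): successive gamma corrections compose by multiplying the exponents, so the stored pre-correction and the platform's built-in correction together approximate the full 1/2.5 correction:

$$\frac{1}{1.8} \cdot \frac{1}{1.4} = \frac{1}{2.52} \approx \frac{1}{2.5}, \qquad \frac{1}{1.5} \cdot \frac{1}{1.7} = \frac{1}{2.55} \approx \frac{1}{2.5}.$$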


The gamma correction is used not only for compensating the nonlinearity of the display device; it is also used to improve perceptual coding efficiency. Let us consider a linear light representation with 8 bits, where code zero represents black and code 255 represents white. Typically, the human eye resolves two different intensities under ideal conditions if the difference between them is not less than 1 percent. Thus, the eye is more sensitive to the difference between two intensities when the intensity level is small than when it is high, since the relative difference between two neighboring codes is higher for lower code values. For example, if code 20 changes to code 19 or 21, due to a coding error or noise, the relative change is about 5 percent, which is noticeable to most observers. Such errors are especially detectable in large shaded areas, where they are called contouring or banding. On the other hand, if code 230 changes to 231, the relative difference is less than 0.5 percent, which is visually indistinguishable. Thus, code 231 does not carry perceptually valuable information and can be rounded to 230.

The gamma correction readjusts the codes in such a way that lower luminance levels are quantized more accurately and high luminance levels more roughly (Figure 3; a small code illustration follows the figure caption). The light representation by codes after gamma correction is therefore nonlinear; however, it becomes linear from the point of view of the human visual system, that is, a linear increase of the code number linearly increases the lightness sensation. Thus, when coding corrected data one may allow the encoding error to be independent of the base level. For non-corrected CFA data, however, an encoding error independent of the base level is not acceptable.

Figure 3. Readjusting codes by gamma correction: a higher adjustment range for low intensity levels (10–20 extended to the range 56–78) and a lower adjustment range for high intensity levels (230–240 compressed to the range 242–247).
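The readjustment of Figure 3 can be illustrated with a few lines of C; the function name is ours, and gamma = 2.2 with an 8-bit range are example settings:

    #include <math.h>

    /* Map a linear code to its gamma-corrected code: dark codes are spread
       apart, bright codes are packed together, as sketched in Figure 3. */
    int gamma_correct(int code, double gamma, int max_val)
    {
        return (int)lround(max_val * pow((double)code / max_val, 1.0 / gamma));
    }

For example, with gamma = 2.2 and max_val = 255, the low input range 10–20 maps to a wider output range, while 230–240 maps to a narrower one, in line with the figure.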

The above considerations are illustrated by the following real-life example. A picture with linearly varying intensity (Figure 4a) is captured by a digital camera with no gamma correction (Figure 4b). This data is structurally transformed as described above and compressed by the JPEG-LS algorithm in near-lossless mode with a maximum allowable encoding error of 10 levels. While this is a rather high error for the near-lossless mode, it was chosen to make the effect clearly visible on a printed document. Figure 4c shows the decompressed image after all image reconstruction procedures. The step-like distortions are clearly visible in the left (dark) and middle (gray) parts of the image, as they are magnified by the gamma correction. There are almost no visible distortions in the right (light) part of the image, as they are attenuated by the gamma correction. Then, the gamma correction was applied to the captured data and the result was compressed in the same way as before. Figure 4d shows the decompressed image after all image reconstruction procedures except gamma correction, which was applied before the compression. Now the step-like distortions are uniformly distributed across the whole image and are independent of the luminance level. This example illustrates that non-corrected CFA data is not suitable for lossy compression by methods in which the allowable encoding error is uniform across all intensity levels. A solution is to make the encoding error a function of the luminance level.


Figure 4. Quantization effect without and with gamma correction. A) original picture with linearly varying intensity; B) linear raw data captured by a digital camera without gamma correction; C) decompressed and reconstructed data after compression of non-corrected data with JPEG-LS and a near-lossless error of 10 levels; D) decompressed and reconstructed data after compression of gamma-corrected data with JPEG-LS and a near-lossless error of 10 levels.

4. ADAPTIVE ERROR QUANTIZATION FUNCTION

We suggest a straightforward approach for finding the allowable encoding error for different intensity levels. It is described as C code in Table 1.

Table 1. C code for the adaptive error calculation function (x, m, a, b, c are ints; pow and round come from math.h; threshold[] is assumed zero-initialized; the clamping of x − m and x + m to the valid range is our fix to keep the arguments of pow non-negative).

    for (x = 0; x < maxVal; x++) {
        for (m = 1; m < deviationRange; m++) {
            int lo = x - m < 0 ? 0 : x - m;            /* clamp to valid range */
            int hi = x + m > maxVal ? maxVal : x + m;
            a = (int)round(maxVal * pow((double)lo / maxVal, oneOverGamma));
            b = (int)round(maxVal * pow((double)hi / maxVal, oneOverGamma));
            c = (int)round(maxVal * pow((double)x  / maxVal, oneOverGamma));
            if (b > c + maxError || a < c - maxError) {
                threshold[x] = m - 1;  /* largest deviation still within maxError */
                break;
            }
        }
    }


For a luminance value x in the range between 0 and maxVal, the program takes an 'etalon' output value c and two values a and b deviated by m, all three being corrected with the reciprocal of the desired gamma. If either of the values a and b deviates from the 'etalon' value by more than the allowable reconstruction error, the allowable encoding error for x is set to m − 1 and stored in threshold. In Table 1, maxVal equals 2^P − 1, where P is the precision (typically 8 to 16 bits per pixel, bpp). maxError is the maximum allowable error for the reconstructed image. deviationRange is the maximum range for the deviation parameter m; it should be much larger than maxError and much smaller than maxVal.

Figure 5 depicts a threshold function obtained for an allowable reconstruction error equal to one, gamma equal to 2.2 and 10 bpp precision (maxVal = 1023). It shows that for intensity levels below approximately 120 only zero error may be allowed, since a unit encoding error would cause a reconstruction error higher than unity. Within the range between 120 and 220 the allowed error oscillates between zero and unity, and it stabilizes within the range between 220 and 425. The same behavior repeats for the transition from a unit error to an error of two, and so on. For high intensity levels, higher encoding errors (2 or 3) cause a reconstruction error no higher than unity. In order to avoid the tricky regions where the allowable encoding error is not stable, we bound the threshold function from below (the thick dashed line in Figure 5; one possible realization is sketched after the figure caption). Consequently, zero encoding error is assumed for intensity levels below 220, a unit error is allowed between 221 and 810, and for intensities higher than 810 the allowed error equals 2.

Figure 5. Allowable encoding error produced with the code from Table 1 (thin solid line); the same function bounded from below (thick dashed line).
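The paper does not spell out the exact bounding rule; one plausible realization, sketched below under that caveat, replaces each threshold value by the minimum over all higher intensity levels. This yields a non-decreasing curve that never exceeds the original, matching the thick dashed line in Figure 5:

    /* Bound threshold[0..maxVal-1] from below with a suffix minimum:
       after this pass the allowed error is non-decreasing in intensity
       and never larger than the value computed in Table 1. */
    for (int x = maxVal - 2; x >= 0; x--)
        if (threshold[x + 1] < threshold[x])
            threshold[x] = threshold[x + 1];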

For 8 bpp precision (256 intensity levels) the typical tolerable reconstruction error ranges between 1 and 3. An error of 3 out of 256 levels amounts to 1.17 percent, which is around the visibility threshold of 1 percent. Therefore, reconstruction errors higher than 3 are considered unacceptable, since the coding distortions might become visible. A higher camera precision (10 or 12 bpp) allows increasing the tolerable reconstruction error: for example, at 10 bpp precision the encoding errors of 4, 8 and 12 correspond to the error values 1, 2 and 3 at 8 bpp precision (12 is 1.17 percent of 1024). Figure 6 presents error threshold functions for different combinations of tolerated reconstruction error, gamma and precision. For a given gamma factor and bpp precision, and for a desired (tolerated) reconstruction error not exceeding some predefined value at any intensity level, the allowable encoding error can be determined by the approach suggested above. Other encoding error functions, specific to particular applications and modeling the human visual system more accurately, can be used as well11. In the next section we demonstrate how the encoding error function may be embedded in a real coder, i.e. JPEG-LS.


Figure 6. Allowable encoding error as a function of luminance level. A) gamma factor of 2.2, 10 bpp precision and different tolerated reconstruction errors; B) 10 bpp precision, four levels of tolerated error and different gamma factors; C) gamma factor of 2.2, a reconstruction error equivalent to a unit error at 8 bpp, and different bpp precisions.

5. MODIFIED JPEG-LS ALGORITHM

From a realization point of view, near-lossless compression is usually based on a lossless algorithm; that is, the lossless algorithm provides a near-lossless mode with no significant increase in computational and memory requirements. Since the compression ratio achieved with near-lossless compression is not very high, the algorithm should be reasonably simple and not computationally expensive. In our opinion, JPEG-LS12 is a good candidate: it yields a compression ratio only about 3 percent worse than that of one of the best lossless compression algorithms, CALIC13, while being significantly faster. Our modification of the JPEG-LS algorithm is based on the JPEG-LS source code written at the Signal Processing and Multimedia Group of the University of British Columbia14. The block diagram of the modified JPEG-LS encoder is presented in Figure 7. The additional blocks and connections, compared to the original encoder12, are shown by dashed lines.


Figure 7. Block diagram of the modified JPEG-LS algorithm. The modeler comprises the gradients/context modeler, the fixed and adaptive predictors, and the adaptive error quantization and dequantization blocks; the coder is a Golomb coder.

In general, the encoder consists of two parts: a source modeler and an entropy coder. The JPEG-LS modeler is composed of a fixed and an adaptive predictor. The fixed predictor performs a primitive edge-detection test, while the adaptive part is a context-dependent integer additive term. As the fixed predictor, the 'median edge detector' (MED) is used12, a simple predictor with rudimentary edge detection capabilities:

$$x_{MED} = \begin{cases} \min(a, b), & \text{if } c \geq \max(a, b) \\ \max(a, b), & \text{if } c \leq \min(a, b) \\ a + b - c, & \text{otherwise.} \end{cases} \qquad (1)$$

The predictor (1) switches between three simple predictors: it tends to select b in cases where a vertical edge exists to the left of the current location, a in cases of a horizontal edge above the current location, and a + b − c if no edge is detected. The obtained prediction value $x_{MED}$ is corrected by a context-dependent term calculated from previous image statistics:

$$x_{pred} = x_{MED} + C[q], \qquad (2)$$

where C is an array of context-dependent correction terms, and q is the index of the context in which the input image sample x occurs. The context is determined by local gradients, calculated from the differences $g_1 = d - b$, $g_2 = b - c$, $g_3 = c - a$. Each difference is quantized into approximately equiprobable intervals (nine regions per gradient: zero, four positive and four negative); merging symmetric triples gives $(9^3 + 1)/2 = 365$ contexts.
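A direct C rendering of the MED predictor (1), with a, b and c the left, upper and upper-left causal neighbours of the current sample (the function name is ours):

    /* Median edge detector (MED) of Eq. (1). */
    static int med_predict(int a, int b, int c)
    {
        int mn = a < b ? a : b;
        int mx = a < b ? b : a;
        if (c >= mx) return mn;   /* edge: pick the smaller neighbour */
        if (c <= mn) return mx;   /* edge: pick the larger neighbour  */
        return a + b - c;         /* smooth area: planar prediction   */
    }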


The distribution of the prediction residuals $e = x - x_{pred}$ can be very well approximated by a two-sided geometric distribution. It is then transformed into a one-sided geometric distribution and efficiently encoded by extended Golomb-type codes, which are adaptively selected15.

In our modification, we have embedded two additional blocks: adaptive error quantization and adaptive error dequantization. The first one performs error quantization according to the following equation:

$$e_{mod} = \mathrm{round}\left(\frac{e}{A[x_{pred}] + 1}\right), \qquad (3)$$

where A is an array of allowable encoding errors, available to both the encoder and the decoder. It is pre-computed in advance, by the procedure described in Section 4, for the desired gamma, the allowable reconstruction error and the given precision. The modified error value $e_{mod}$ is encoded by a Golomb coder into the compressed bitstream. The second block calculates the reconstructed error $e_{rec}$ by applying the inverse operation

$$e_{rec} = e_{mod} \times \left(A[x_{pred}] + 1\right). \qquad (4)$$

The reconstructed image sample is then found from the prediction value $x_{pred}$ and the reconstructed error $e_{rec}$:

$$x_{rec} = x_{pred} + e_{rec}. \qquad (5)$$

The real image sample x is substituted by the reconstructed value $x_{rec}$, so that the latter is used in the prediction process for further image samples. On the decoder side, the encoded error value $e_{mod}$ is extracted from the compressed bitstream by a Golomb decoder, and the prediction value $x_{pred}$ is obtained using (1) and (2) from the previously decoded image samples. The reconstructed error value $e_{rec}$ and the reconstructed image sample $x_{rec}$ are obtained in the same way as in the encoder, using (4) and (5) respectively.

In equations (3) and (4), the real image sample x should be used as the index into the array of allowable errors A. However, the exact value x cannot be used, because it is not available at the decoder side. Thus, we replace it with $x_{pred}$, the best estimate of x available to both the encoder and the decoder. With this substitution, however, a violation may occur if A[x] is smaller than A[x_pred]: the reconstruction error may then exceed the maximum allowable reconstruction error. An example is a step-like transition from very high intensity to low intensity (from white to black). We have observed that the percentage of such 'failures' ranges from 1% to 3% for the test images used in our experiments. In our algorithm realization this situation is handled as an exception: if the difference between the real image sample x and the reconstructed one $x_{rec}$ exceeds the allowable reconstruction error A[x], the encoder signals the decoder with a special symbol and then encodes the value to which the reconstructed sample $x_{rec}$ should be corrected.
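A compact C sketch of equations (3)–(5), assuming a precomputed table A[] indexed by the prediction value; the function name is ours, and the exception path for the rare 'failure' cases is omitted:

    /* Quantize the prediction error (3), dequantize it (4) and form the
       reconstructed sample (5). 'allowed' is A[x_pred]. */
    static int reconstruct(int x, int x_pred, int allowed, int *emod_out)
    {
        int q = allowed + 1;
        int e = x - x_pred;
        int emod = e >= 0 ? (e + q / 2) / q        /* Eq. (3): round(e / q) */
                          : -((-e + q / 2) / q);
        *emod_out = emod;                          /* sent to the Golomb coder */
        return x_pred + emod * q;                  /* Eqs. (4) and (5)         */
    }

The decoder performs the same dequantization (4) and reconstruction (5) from the decoded emod, so the encoder and decoder stay in lockstep.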

Figure 8. Histograms of the prediction error and the modified prediction error.


The histograms of the prediction residuals e and the modified prediction errors $e_{mod}$ for the "Marina" test image (cf. Figure 9) are shown in Figure 8. As can be seen, the histogram of the modified prediction errors is narrower, which results in higher compression by the Golomb coder.

6. EXPERIMENTAL RESULTS

We tested the proposed algorithm on real Bayer CFA raw data captured by a digital camera with 10 bits/pixel resolution. All images have general content with both smooth areas and details. The dimension of the test images is 512×512 pixels. Grayscale thumbnails of the test images after reconstruction are shown in Figure 9. First, the structural transformation described in Section 2 was applied to the test CFA images. Then, the images were compressed by the modified JPEG-LS algorithm in lossless mode and in near-lossless mode with allowable reconstruction errors of 4, 8 and 12. The compression results in terms of compression ratio are presented in Table 2. The compression ratio was calculated as

$$CR = \frac{327680}{\text{compressed file size}},$$

where 327680 is the required storage space in bytes for the original test data (512 × 512 × 10/8 = 327680 bytes).
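As a worked example (our arithmetic; the file size is inferred, not reported): the ratio of 3.61 quoted with Figure 10 corresponds to a compressed file of about 327680 / 3.61 ≈ 90 770 bytes, i.e. roughly 10 / 3.61 ≈ 2.8 bits per pixel instead of the original 10.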

Figure 9. Grayscale thumbnails of test images after reconstruction.

The visual quality of a decompressed image is demonstrated in Figure 10, which depicts images reconstructed from the original data (Figure 10A) and from data compressed by the proposed method (Figure 10B). The allowable reconstruction error is 12 levels and the gamma factor is 2.2. The resulting compression ratio for these settings is 3.61. The histogram of the difference between the two images is shown in Figure 10C. No error larger than 12 levels, and no visual difference between the images, is observed.

Table 2. Compression ratios for lossless compression and near-lossless compression with tolerated reconstruction errors of 4, 8 and 12.

Image       Lossless   ∆rec = 4   ∆rec = 8   ∆rec = 12
Beer        1.8006     2.4571     3.0872     3.6251
Car         1.6096     2.2911     2.8474     3.3068
Crossroad   1.6501     2.3645     2.9582     3.4557
Marina      1.6928     2.4427     3.0783     3.6184
Sofa        2.0294     2.8867     3.7287     4.5586
Table       1.6311     2.4260     3.0683     3.5652
Average     1.7356     2.4780     3.1280     3.6883

Figure 10. A) Image reconstructed from the original CFA data; B) image reconstructed from the decompressed CFA data; C) histogram of the difference between them.

Figure 11. Average memory requirement for storing Bayer CFA data with different techniques: 1 - no compression, 10 bpp; 2 - lossless compression; 3 - near-lossless with tolerated reconstruction error 4; 4 - near-lossless with tolerated reconstruction error 8; 5 - near-lossless with tolerated reconstruction error 12.

Figure 11 illustrates the space saving achieved by near-lossless compression compared to lossless compression and to uncompressed data representation. A strictly lossless compression method reduces the required storage space by a factor of 1.73 compared to uncompressed data. A compression ratio of 3.68 is achievable with the near-lossless method, which is 2.12 times better than that of strictly lossless compression. Better accuracy might be required for some applications; it can easily be achieved by selecting a smaller allowable reconstruction error, at the price of some increase in the resulting file size.

CONCLUSIONS

In many imaging applications, saving unreconstructed raw CFA data ('digital negatives') is preferable. This problem has motivated our attempt to find an efficient way to store CFA data. We have studied the near-lossless compression approach, as it allows a higher compression ratio than any strictly lossless compression algorithm at the price of a small and controllable error, while introducing no visual distortions. In our algorithm, we first pack the pixels of the same color into structures appropriate for the subsequent compression algorithm. We have modified the JPEG-LS algorithm in such a way that a nonlinear and adaptive error quantization function is embedded after the fixed and context-adaptive predictors. As a result, the encoding error becomes a function of the luminance level, thus incorporating the gamma correction into the compression process. The algorithm can be adjusted for any pixel resolution, gamma value and allowable error range. It has been applied to the compression of real-life CFA data and has shown promising results.

REFERENCES

1. B. E. Bayer, "Color imaging array", US Patent 3,971,065, 1976.
2. S.-Y. Lee and A. Ortega, "A novel approach of image compression in digital cameras with a Bayer color filter array", Proc. Int. Conf. Image Processing, vol. 3, pp. 482–485, 2001.
3. T. Toi and M. Ohita, "A subband coding technique for image compression in single CCD cameras with Bayer color filter arrays", IEEE Trans. Consumer Electronics, vol. 45, pp. 176–180, Feb. 1999.
4. C. C. Koh, J. Mukherjee and S. K. Mitra, "New efficient methods of image compression in digital cameras with color filter array", IEEE Trans. Consumer Electronics, vol. 49, pp. 1448–1456, Nov. 2003.
5. A. Bazhyna, A. Gotchev and K. Egiazarian, "Lossless compression of Bayer pattern color filter arrays", to appear in Proc. IS&T/SPIE Electronic Imaging Symposium, 16–20 January 2005, San Jose, California, USA.
6. C. Poynton, A Technical Introduction to Digital Video, Wiley, New York, 1996.
7. W. B. Pennebaker and J. L. Mitchell, JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, 1993.
8. J. Miano, Compressed Image File Formats: JPEG, PNG, GIF, XBM, BMP, ACM Press, 1999.
9. G. Roelofs, PNG: The Definitive Guide, O'Reilly & Associates, 1999.
10. Computer Graphics Systems Development Corporation, http://www.cgsd.com/papers/gamma_intro.html
11. B. Girod, "The information theoretical significance of spatial and temporal masking in video signals", Proc. SPIE Conf. Human Vision, Visual Processing and Digital Display, vol. 1077, pp. 178–187, 1989.
12. M. J. Weinberger, G. Seroussi and G. Sapiro, "The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS", IEEE Trans. Image Processing, vol. 9, pp. 1309–1324, Aug. 2000.
13. X. Wu and N. D. Memon, "Context-based, adaptive, lossless image coding", IEEE Trans. Commun., vol. 45, pp. 437–444, Apr. 1997.
14. JPEG-LS v2.2 source code, ftp://ftp.se.netbsd.org/pub/NetBSD/packages/distfiles/jpeg_ls_v2.2.tar.gz
15. N. Merhav, G. Seroussi and M. J. Weinberger, "Coding of sources with two-sided geometric distributions and unknown parameters", IEEE Trans. Inform. Theory, vol. 46, pp. 229–236, Jan. 2000.
