A parallel algorithm for lossless image compression by block matching


Luigi Cinque and Franco Liberati

Computer Science Department, University of Rome, Italy

Sergio De Agostino

Computer Science Department, AASU, Savannah, GA 31419, USA

Storer [1] generalized the LZ1 method to lossless image compression and suggested that very fast encoders are possible by showing a square greedy matching LZ1 compression heuristic, which can be implemented by a simple hashing scheme and achieves 60 to 70 percent of the compression of JBIG1 on the CCITT bi-level image test set. The image must be scanned in some linear order and, in order to achieve a good compression performance, bidimensional matches have to be computed. A 64K table with one position for each possible 4x4 subarray is the only data structure used. All-zero and all-one squares are handled differently. The encoding scheme is to precede each item with a flag field indicating whether there is a monochromatic square, a match or raw data. When there is a match, the 4x4 subarray in the current position is hashed to yield a pointer to a copy. This pointer is used for the current square greedy match and then replaced in the hash table by a pointer to the current position.

To improve the compression performance, Storer and Helfgott [2] introduced a slower rectangle greedy matching technique requiring O(M log M) time, where M is the size of the match. Both heuristics work with an unrestricted window. In a previous work we implemented the rectangle greedy matching heuristic using a finite window and a bound on the match size, and achieved 75 to 90 percent of the compression of JBIG1 on the CCITT bi-level image test set.

Now we show a parallel algorithm using a rectangle greedy matching technique which requires a linear number of processors and O(log M log n) time on the PRAM EREW model. The algorithm is suitable for practical parallel architectures such as a mesh of trees, a pyramid or a multigrid. We implemented a sequential procedure which simulates the compression performed by the parallel algorithm; it achieves 95 to 97 percent of the compression of the sequential heuristic mentioned above.
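The hashing scheme described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names, the bit-packing hash, and the Python representation of the image as a list of 0/1 rows are all assumptions.

```python
# Sketch of square greedy matching via a 4x4 hash table (an assumption-laden
# illustration of Storer's scheme, not the paper's code).

def hash44(img, i, j):
    """Pack the 4x4 bi-level subarray at (i, j) into a 16-bit key."""
    key = 0
    for di in range(4):
        for dj in range(4):
            key = (key << 1) | img[i + di][j + dj]
    return key

def square_greedy_match(img, table, i, j):
    """Return ((ci, cj), side) for the greedy square match, or None.

    `table` maps each 4x4 key to the last position where it was seen;
    the entry is replaced by the current position, as in the scheme above.
    """
    m, n = len(img), len(img[0])
    if i + 4 > m or j + 4 > n:
        return None
    key = hash44(img, i, j)
    copy = table.get(key)            # pointer to an earlier occurrence
    table[key] = (i, j)              # replace with the current position
    if copy is None:
        return None
    ci, cj = copy
    # the table stores positions only, so verify the 4x4 anchor matches
    if any(img[i + di][j + dj] != img[ci + di][cj + dj]
           for di in range(4) for dj in range(4)):
        return None
    side = 4
    # grow the square while the L-shaped extension still matches the copy
    while (i + side < m and j + side < n and
           ci + side < m and cj + side < n and
           all(img[i + d][j + side] == img[ci + d][cj + side]
               for d in range(side + 1)) and
           all(img[i + side][j + d] == img[ci + side][cj + d]
               for d in range(side))):
        side += 1
    return copy, side
```

On a periodic image a repeated 4x4 anchor is found in the table and the match grows until the image boundary or a mismatch stops it.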
To achieve logarithmic time, we partition an m x n image I into x x y rectangular areas where x and y are O((log mn)^(1/2)). In parallel for each area, one processor applies the sequential parsing algorithm, so that in logarithmic time each area is parsed into rectangles, some of which are monochromatic. Before encoding, we compute larger monochromatic rectangles by merging the ones adjacent on the horizontal boundaries and then on the vertical boundaries, doubling in this way the length and width of each area at each step.
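One merge step across a shared boundary can be sketched as follows. This is a toy illustration, not the paper's PRAM procedure: the rectangle representation (top, left, height, width, color) and the function name are assumptions, and it fuses only rectangles whose row spans coincide exactly at the boundary.

```python
# Toy sketch of one merge step: monochromatic rectangles that abut at a
# shared vertical boundary and agree in color and row span are fused.

def merge_across_vertical_boundary(left_rects, right_rects, boundary_col):
    """Fuse rectangles (top, left, height, width, color) at `boundary_col`."""
    merged, used = [], set()
    for lt, ll, lh, lw, lc in left_rects:
        if ll + lw != boundary_col:          # does not reach the boundary
            merged.append((lt, ll, lh, lw, lc))
            continue
        for k, (rt, rl, rh, rw, rc) in enumerate(right_rects):
            if (k not in used and rl == boundary_col
                    and rt == lt and rh == lh and rc == lc):
                merged.append((lt, ll, lh, lw + rw, lc))   # fused rectangle
                used.add(k)
                break
        else:                                 # no partner on the right side
            merged.append((lt, ll, lh, lw, lc))
    merged.extend(r for k, r in enumerate(right_rects) if k not in used)
    return merged
```

Applying the same step alternately on horizontal and vertical boundaries doubles the side lengths of the merged areas at each round, which is what yields the logarithmic number of merge rounds claimed above.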

References

[1] Storer J. A. [1996], "Lossless Image Compression Using Generalized LZ1-Type Methods", Proceedings IEEE Data Compression Conference, 290-299.

[2] Storer J. A. and Helfgott H. [1997], "Lossless Image Compression by Block Matching", The Computer Journal 40, 137-145.

Proceedings of the DATA COMPRESSION CONFERENCE (DCC’02) 1068-0314/02 $17.00 © 2002 IEEE
