Literature search on bi-level image compression

Brian L. Evans

Halftone Compression

  1. Denecker, K.; De Neve, P.; Lemahieu, I. Binary tree context modeling of halftone images using a fast adaptive template selection scheme. Proc. SPIE Electronic Imaging: Processing, Printing, and Publishing in Color, Zurich, Switzerland, 18-20 May 1998, vol. 3409, pp. 230-241.

    Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG achieve only moderate compression ratios because they do not fully take into account the special image characteristics. In this paper, we present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the classical halftone image. This is a non-trivial problem for which we propose a fast close-to-optimal context template selection scheme based on the sorted autocorrelation function of a part of the image. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, as well as with stochastic halftones. For classical halftones, the global improvement with respect to JBIG in its best mode is about 30% to 50%; binary tree modeling increases this by another 5% to 10%. For stochastic halftones, the autocorrelation-based template gives no improvement, though an exhaustive search technique shows that even bigger improvements are feasible with context modeling; introducing binary tree modeling increases the compression ratio by about 10%.

  2. Denecker, K.; De Neve, P.; Lemahieu, I. Improved lossless halftone image coding using a fast adaptive context template selection scheme. Proc. IEEE Data Compression Conf., Snowbird, UT, 30 March-1 April 1998, p. 541.

    Applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. We present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast close-to-optimal context template selection scheme based on the calculation and sorting of the autocorrelation function on a part of the image. For evaluating our scheme, we have also investigated the compression performance of a suboptimal approach based on an incremental exhaustive search; this approach can be used in practice for decompression but not for compression because of timing requirements. We have experimented with classical halftones of different resolutions and sizes and screened under different angles.

  3. Denecker, K.; Van Assche, S.; Lemahieu, I. A fast autocorrelation based context template selection scheme for lossless compression of halftone images. Proc. SPIE Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts, San Jose, CA, USA, 28-30 Jan. 1998, vol. 3300, pp. 262-272.

    Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG only achieve moderate compression ratios due to the periodic dot structure of classical halftones. In this paper, we present two improvements on the context modeling scheme. Firstly, we adapt the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast close-to-optimal context selection scheme based on the calculation and sorting of the autocorrelation function on a part of the image. Secondly, increasing the order of the model produces an additional gain in compression. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, and produced by either a digital halftoning algorithm or a digital scanner. The additional coding time for context selection is negligible, while the decoding time remains the same. The global improvement with respect to JBIG (which has features like one adaptive pixel, histogram rescaling and multiresolution decomposition) is about 30% to 65%, depending on the image.
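    The template-selection idea shared by these three Denecker et al. papers can be sketched briefly: estimate the autocorrelation of a sample patch, sort the causal lags by correlation strength, and take the strongest lags as the context pixels, so that for a periodic halftone the template lands on previous dots of the same screen. Below is a minimal Python/NumPy sketch of that idea; the function name and parameters are my own, and the papers' actual scheme includes refinements not shown here:

```python
import numpy as np

def select_context_template(patch, num_pixels=10, max_lag=8):
    """Pick context-template offsets by sorting the autocorrelation of a
    sample patch; an illustrative sketch, not the papers' exact algorithm."""
    x = patch.astype(float) - patch.mean()
    f = np.fft.fft2(x)
    ac = np.fft.ifft2(f * np.conj(f)).real    # circular autocorrelation
    ac = np.round(ac, 6)                      # break float ties deterministically
    candidates = []
    for dy in range(-max_lag, 1):
        for dx in range(-max_lag, max_lag + 1):
            if dy < 0 or dx < 0:              # causal offsets only
                candidates.append(
                    (ac[dy % patch.shape[0], dx % patch.shape[1]], dy, dx))
    candidates.sort(reverse=True)             # strongest correlation first
    return [(dy, dx) for _, dy, dx in candidates[:num_pixels]]
```

    For a halftone with horizontal period 4, for example, the strongest causal offset found is (0, -4): the previous dot in the same row.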

  4. Hongu, T.; Kusaka, N.; Omachi, T.; Takashima, Y. Data compression of half-tone images by periodic error diffusion method. NEC Research and Development, April 1992, vol. 33, no. 2, pp. 237-245.

    The recent expansion of facsimile application fields has increased the need to transmit at high speed not only black-and-white binary images of characters, but also half-tone images, including high-quality photographic images. The authors propose the periodic error diffusion method for realizing high-quality half-tone images with highly efficient coding. This method renders both gray scale and resolution well, and moire patterns are unlikely to occur for screened photographs. In addition, the method achieves high coding efficiency because the thresholded half-tone data has a periodic structure. The authors also propose an adaptive coding preprocessing method for thresholded mixed images containing both characters and half-tone images.

  5. Yoshida, S.; Okada, Y.; Nakano, Y.; Chiba, H.; and others. New binary image compression scheme for various kinds of images. Proc. SPIE Image Processing Algorithms and Techniques Conf., San Jose, CA, USA, 10-13 Feb. 1992, vol. 1657, pp. 124-128.

    This paper presents an efficient lossless data compression scheme for various kinds of binary images such as line drawings and half-tone pictures. The authors studied a combination of preprocessing and Lempel-Ziv universal coding. To improve the compression ratio, the authors modified Lempel-Ziv universal coding by dividing its dictionary into classified subdictionaries. They obtained improved compression ratios in computer simulation on composite images consisting of mixed text and images.

  6. Nakano, Y.; Chiba, H.; Okada, Y.; Yoshida, S.; and others. Study of binary image compression using universal coding. Proc. SPIE Visual Communications Conf., 11-13 Nov. 1991, vol. 1605, pt. 1, pp. 874-878.

    This paper presents a new approach using universal coding for various kinds of binary images, such as line-drawings and half-tones. The authors studied two types of preprocessing of universal coding for binary images, and found that for various kinds of line-drawings and half-tone (screen-dots) images, both preprocessors outperformed conventional schemes.

Rate-Distortion for Binary Image Compression

  1. Brandt, J.W.; Jain, A.K.; Algazi, V.R. Medial axis representation and encoding of scanned documents. Journal of Visual Communication and Image Representation, June 1991, vol. 2, no. 2, pp. 51-65.

    The medial axis transform (MAT) is a sparse representation of shape, which, being reversible, has potential for binary image compression. The MAT also provides structural information not accessible with alternative binary image codes. A MAT-based coding scheme which respects its structural properties is developed. The MAT-based coding method is applied to the CCITT facsimile test images. The resulting compression is comparable to that of known coding schemes for textual and somewhat better for graphical data. Some simple modifications to improve skeleton coding performance are suggested. To substantiate the claim that the MAT is effective for higher-level processing, the representation is extended to perform line segment and curve extraction. The extraction takes place under the control of a global error tolerance parameter and thus provides a new method to trade off data compression with distortion for binary images.

Other Binary Image Compression Papers

  1. Kresch, R.; Malah, D. Skeleton-based morphological coding of binary images. IEEE Transactions on Image Processing, Oct. 1998, vol. 7, no. 10, pp. 1387-1399.

    This paper presents new properties of the discrete morphological skeleton representation of binary images, along with a novel coding scheme for lossless binary image compression that is based on these properties. Following a short review of the theoretical background, two sets of new properties of the discrete morphological skeleton representation of binary images are proved. The first one leads to the conclusion that only the radii of skeleton points belonging to a subset of the ultimate erosions are needed for perfect reconstruction. This corresponds to a lossless sampling of the quench function. The second set of new properties is related to deterministic prediction of skeletonal information in a progressive transmission scheme. Based on the new properties, a novel coding scheme for binary images is presented. The proposed scheme is suitable for progressive transmission and fast implementation. Computer simulations, also presented, show that the proposed coding scheme substantially improves the results obtained by previous skeleton-based coders, and performs better than classical coders, including run-length/Huffman, quadtree, and chain coders. For facsimile images, its performance can be placed between the modified read (MR) method (K=4) and modified modified read (MMR) method.
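    The classical morphological skeleton this coder builds on is compact enough to sketch: the skeleton subset S_k collects the points of the k-fold erosion that an opening removes, and dilating each S_k back k times reconstructs the image exactly. The Python/NumPy sketch below implements that textbook construction with a 3x3 square structuring element; it illustrates the representation only, not the paper's improved coding scheme, and the helper names are my own:

```python
import numpy as np

def erode(a):
    """One binary erosion with a 3x3 square structuring element."""
    p = np.pad(a, 1)
    return np.min([p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def dilate(a):
    """One binary dilation with a 3x3 square structuring element."""
    p = np.pad(a, 1)
    return np.max([p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def morphological_skeleton(img):
    """Skeleton subsets S_k = (img eroded k times) minus its opening."""
    subsets, e, k = {}, img.copy(), 0
    while e.any():
        s = e & ~dilate(erode(e))       # points the opening removes
        if s.any():
            subsets[k] = s
        e, k = erode(e), k + 1
    return subsets

def reconstruct(subsets, shape):
    """Perfect reconstruction: union of each S_k dilated k times."""
    out = np.zeros(shape, dtype=bool)
    for k, s in subsets.items():
        for _ in range(k):
            s = dilate(s)
        out |= s
    return out
```

    For a 5x5 solid square, the skeleton collapses to a single centre point with radius 2, and reconstruction recovers the square exactly.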

  2. Kuo-Liang Chung; Kuo-Bao Hong. Level compression-based image representation and its applications. Pattern Recognition, March 1998, vol. 31, no. 3, pp. 327-332.

    In this paper, a level compression-based image representation (LCBIR) is presented. This new image representation method improves on the bintree representation for compressing digital binary images. We then present a fast search algorithm on the LCBIR, which supports fast search and query in pictorial databases. Experimental results show that our search algorithm on the LCBIR is faster than the corresponding algorithm on the bintree representation.

  3. Chi-Yen Huang; Kuo-Liang Chung. Manipulating images by using run-length Morton codes. Int. Journal of Pattern Recognition and Artificial Intelligence, Sept. 1997, vol. 11, no. 6, pp. 889-906.

    In this paper, we first present a variation of 2D run-encoding, called the runlength Morton code encoding scheme, for compressing binary images. We then present efficient algorithms for manipulating set operations and performing conversions between the proposed encoding scheme and some well-known spatial data structures. The time complexities of the set operations are linearly proportional to the number of runlength Morton codes, and the time complexities of the conversions are linearly proportional to the number of nodes in the corresponding quadtree/bintree with respect to the runlength Morton codes.
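    The underlying encoding is easy to illustrate: a Morton (Z-order) code interleaves the bits of the row and column indices, so the pixels of small quadtree blocks receive consecutive codes that compress well as (start, length) runs. A hypothetical Python sketch of this idea, not the paper's exact scheme:

```python
def morton(y, x, bits=16):
    """Interleave the bits of (y, x) into a Z-order index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return code

def runlength_morton_encode(black_pixels, bits=16):
    """Encode black pixels as (start, length) runs of consecutive
    Morton codes; an illustrative sketch only."""
    codes = sorted(morton(y, x, bits) for y, x in black_pixels)
    runs = []
    for c in codes:
        if runs and c == runs[-1][0] + runs[-1][1]:
            runs[-1][1] += 1                   # extend the current run
        else:
            runs.append([c, 1])                # start a new run
    return [tuple(r) for r in runs]
```

    A solid 2x2 block at the origin, for instance, collapses to the single run (0, 4), since its four pixels get Morton codes 0 through 3.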

  4. Hobby, J.D. Space-efficient outlines from image data via vertex minimization and grid constraints. Graphical Models and Image Processing, March 1997, vol. 59, no. 2, pp. 73-88.

    When processing shape information derived from a noisy source such as a digital scanner, it is often useful to construct polygonal or curved outlines that match the input to within a specified tolerance and maximize some intuitive notions of smoothness, simplicity, and best fit. The outline description should also be concise enough to be competitive with binary image compression schemes. Otherwise, there will be a strong temptation to lose the advantages of the outline representation by converting back to a binary image format. This paper proposes a two-stage pipeline that provides separate control over the twin goals of smoothness and conciseness: the first stage produces a specification for a set of closed curves that minimize the number of inflections subject to a specified error bound; the second stage produces polygonal outlines that obey the specifications, have vertices on a given grid, and have nearly the minimum possible number of vertices. Both algorithms are reasonably fast in practice, and can be implemented largely with low-precision integer arithmetic.

  5. Murshed, N.A.; Bortolozzi, F.; Sabourin, R. Binary image compression using an identity-mapping backpropagation neural network. Proc. SPIE Applications of Artificial Neural Networks in Image Processing, San Jose, CA, 12-13 Feb. 1997, vol. 3030, pp. 29-35.

    This work proposes a method for using an identity-mapping backpropagation (IMBKP) neural network for binary image compression, aimed at reducing the dimension of the feature vector in a NN-based pattern recognition system. In the proposed method, the IMBKP network was trained with the objective of achieving good reconstruction quality and a reasonable amount of image compression. This criterion is very important when using binary images as feature vectors. Evaluation of the proposed network was performed using 800 images of handwritten signatures. The lowest and highest reconstruction errors were, respectively, 3.05x10^-3 % and 0.01%. The proposed network can be used to reduce the dimension of the input vector to a NN-based pattern recognition system with almost no degradation and a good reduction in the number of input neurons.

  6. Chippendale, P.; Honary, B.; Arthur, P.; Maundrell, M. Binary image compression and coding techniques for HF radio transmission. Proc. IEE Int. Conf. on HF Radio Systems and Techniques, London, UK, 1997, pp. 130-134.

    This paper explores compression schemes and error control coding for transmitting binary images over HF radio channels. A joint source-channel coding technique to investigate image quantisation using absolutely addressed picture elements (APEL) is discussed. This novel approach restricts error propagation and creates a robust data stream resilient to the problems inherent in HF channels.

  7. Gerek, O.N.; Cetin, A.E.; Tewfik, A.H. Subband coding of binary textual images for document retrieval. Proc. IEEE Int. Conf. on Image Processing, Lausanne, Switzerland, 16-19 Sept. 1996, vol. 2, pp. 899-902.

    Efficient compression of binary textual images is very important for applications such as document archiving and retrieval, digital libraries, and facsimile. The basic property of a textual image is the repetition of small character images and curves inside the document. Exploiting the redundancy of these repetitions is the key step in most coding algorithms. We use a similar compression method in the subband domain. Four different subband decomposition schemes are described and their performance in a textual image compression algorithm is examined. Experimentally, the described methods are found to achieve high compression ratios and to be suitable for fast database access and keyword search.

  8. Franti, P.; Nevalainen, O. Compression of binary images by composite methods based on block coding. Journal of Visual Communication and Image Representation, Dec. 1995, vol. 6, no. 4, pp. 366-377.

    Composite methods for compressing binary images are studied. Hierarchical block coding is the main component in all of them. An attempt is made to increase the compression by augmenting the block coding by predictive coding and bit row reordering. The purpose is to increase the number of white pixels and all-white blocks. An error image is constructed from the differences between the predicted and original values of the pixels. The error image is then coded by hierarchical block coding, in which Huffman coding is used to encode the different bit patterns at the lowest level of the hierarchy. In the method, the global level dependencies are thus handled by block coding and the local pixel-to-pixel dependencies by Huffman coding.
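    Hierarchical block coding itself, the common component here, can be sketched in a few lines: an all-white block costs a single 0 bit, and any other block costs a 1 bit plus the codes of its four quadrants, down to single pixels. The sketch below (assuming a square image whose side is a power of two) shows only this core; the paper's composite methods add predictive coding, bit-row reordering, and Huffman coding of the lowest-level bit patterns:

```python
def block_encode(img, y=0, x=0, size=None, bits=None):
    """Hierarchical block coding of a binary image (list of rows of
    0/1 values): a minimal illustrative sketch, not the paper's
    full composite method."""
    if size is None:
        size, bits = len(img), []
    if all(img[r][c] == 0
           for r in range(y, y + size) for c in range(x, x + size)):
        bits.append(0)          # all-white block: one bit
    elif size == 1:
        bits.append(1)          # a single black pixel
    else:
        bits.append(1)          # mixed block: recurse on quadrants
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                block_encode(img, y + dy, x + dx, h, bits)
    return bits
```

    An all-white 2x2 image encodes to the single bit [0], while a 2x2 image with one black corner pixel encodes to [1, 1, 0, 0, 0].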

  9. Mohamed, S.A.; Fahmy, M.M. Binary image compression using efficient partitioning into rectangular regions. IEEE Transactions on Communications, May 1995, vol. 43, no. 5, pp. 1888-1893.

    A new binary image coding technique is presented. In the new technique, the black regions in the input image are first partitioned into a number of nonoverlapping rectangles. The partitioning can be achieved using one of two different algorithms. The first algorithm can partition the input image into the minimum number of nonoverlapping rectangles at the expense of complex implementation. The second algorithm gives a near optimal partitioning with very simple implementation. After the black regions of the input image are partitioned into rectangles, the coordinates of two opposite vertices of each rectangle are compressed using a simple procedure that allows the decoder to reconstruct the original image. Test images of different types, sizes, and complexities are used to demonstrate the improvement of the new technique over the other currently available binary image compression techniques.
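    As an illustration of the general idea (not either of the paper's two algorithms), a simple greedy partitioner grows each unvisited black pixel rightward into a maximal run and then downward into a rectangle; the function name and output format are my own:

```python
def partition_into_rectangles(img):
    """Greedily partition the black (1) pixels of a binary image into
    non-overlapping axis-aligned rectangles, returned as pairs of
    opposite corners ((y1, x1), (y2, x2)).  A simple heuristic sketch,
    not the paper's minimal or near-optimal algorithm."""
    h, w = len(img), len(img[0])
    covered = [[False] * w for _ in range(h)]
    rects = []
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and not covered[y][x]:
                x2 = x                        # grow right along the row
                while x2 + 1 < w and img[y][x2 + 1] == 1 and not covered[y][x2 + 1]:
                    x2 += 1
                y2 = y                        # grow down while the segment stays black
                while y2 + 1 < h and all(img[y2 + 1][c] == 1 and not covered[y2 + 1][c]
                                         for c in range(x, x2 + 1)):
                    y2 += 1
                for yy in range(y, y2 + 1):   # mark the rectangle as used
                    for xx in range(x, x2 + 1):
                        covered[yy][xx] = True
                rects.append(((y, x), (y2, x2)))
    return rects
```

    Each rectangle then costs only two corner coordinates, which is where the compression in the paper's scheme comes from.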

  10. Di Gesu, V.; Mantaci, S.; Tortorici, G. Compression of binary images based on covering. Proc. Int. Conf. on Computer Analysis of Images and Patterns, Prague, Czech Republic, 6-8 Sept. 1995, pp. 568-573.

    The paper describes a new technique to compress binary images based on an image covering algorithm. The idea is that binary images can always be covered by rectangles, each uniquely described by a vertex and two adjacent edges (L-shape). Some optimisations are necessary to handle degenerate configurations. The method has been tested on several images representing drawings and typed text. Comparison with existing image file compression techniques shows that our approach performs well. Further optimisations are under development.

  11. Estes, R.R., Jr.; Algazi, V.R. Efficient error free chain coding of binary documents. Proc. IEEE Data Compression Conf., Snowbird, UT, 28-30 March 1995, pp. 122-131.

    Finite context models improve the performance of chain based encoders to the point that they become attractive, alternative models for binary image compression. The resulting code is within 4% of JBIG at 200 dpi and is 9% more efficient at 400 dpi.

  12. Augustine, J.; Wen Feng; Jacob, J. Logic minimization based approach for compressing image data. Proc. IEEE Int. Conf. on VLSI Design, New Delhi, India, 4-7 Jan. 1995, pp. 225-228.

    We propose a novel approach for the lossless compression of binary images using logic minimization. The image is divided into windows or blocks of size r*c pixels and each block is transformed into a Boolean switching function in cubical form, treating the pixel values as output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level logic minimizer. To reduce the bits required to encode the minimized cubes (product terms), a code set which satisfies the prefix property is used. If this technique fails to produce compression for a window, the pixels are stored as such. The main motivation of the work has been to investigate the potential of logic minimization as a tool for image data compression. Our technique outperforms UNIX compress in terms of compression ratio on most of the test images. The compression scheme is relatively slower while the decompression time is comparable to that of UNIX compress.

  13. Franti, P. A fast and efficient compression method for binary images. Signal Processing: Image Communication, March 1994, vol. 6, no. 1, pp. 69-76.

    The use of arithmetic coding for binary image compression achieves a high compression ratio, but its running time is rather slow. The composite modelling method presented here reduces the amount of data to be coded by arithmetic coding: uniform areas are coded with less computation, and arithmetic coding is applied only to the areas with more variation.

  14. Feygin, G.; Gulak, P.G.; Chow, P. Architectural advances in the VLSI implementation of arithmetic coding for binary image compression. Proc. IEEE Data Compression Conf., Snowbird, UT, USA, 29-31 March 1994, pp. 254-263.

    This paper presents some recent advances in the architecture for the data compression technique known as arithmetic coding. The new architecture employs loop unrolling and speculative execution of the inner loop of the algorithm to achieve a significant speed-up relative to the Q-coder architecture. This approach reduces the number of iterations required to compress a block of data by a factor that is on the order of the compression ratio. While the speed-up technique has been previously discovered independently by researchers at IBM, no systematic study of the architectural trade-offs has ever been published. For the CCITT facsimile documents, the new architecture achieves a speed-up of approximately seven compared to the IBM Q-coder when four lookahead units are employed in parallel. A structure for fast input/output processing based on run length pre-coding of the data stream to accompany the new architecture is also presented.

  15. Franti, P.; Nevalainen, O. A two-stage modelling method for compressing binary images by arithmetic coding. Computer Journal, 1993, vol. 36, no. 7, pp. 615-622.

    A two-stage modelling schema to be used together with arithmetic coding is proposed. The main motivation of the work has been the relatively slow operation of arithmetic coding. The new modelling schema reduces the use of arithmetic coding by applying a faster global model to large white regions. This composite method works well: on a set of test images it took only about 41% of the time required by a QM-coder, while the loss in compression ratio is only marginal.

  16. Ireton, M.A.; Xydeas, C.S. A progressive encoding technique for binary images. IEE Colloquium on Low Bit Rate Image Coding, 11 May 1990, pp. 11/1-4.

    A major problem with modern document processing systems which incorporate high resolution workstations for displaying images is that the resolution of the stored image is often far greater than the resolution at which it is to be displayed. In contemporary systems an image must be fully decompressed and then have its resolution transformed in order to be displayed as required. The authors present a novel technique for performing compression of binary images which uses progressive encoding. The compression is noiseless, and the progressive encoding allows a low resolution rendition of an image to be reconstructed at a minimal complexity.

  17. Maeder, A.J. Local block pattern methods for binary image encoding. Ausgraph Proceedings, Parkville, Vic., Australia: Australasian Comput. Graphics Assoc., 1988, pp. 251-256.

    A general technique for extending existing encoding methods is proposed which takes advantage of localised two-dimensional structure within an image. This is achieved by using small 2D patterns of adjacent pixel values in the algorithms. These local block patterns (LBPs) reflect the texture or granularity of the image contents. Non-uniform regions with structural regularity or periodicity are well catered for and local rather than global coherence is exploited. Some established binary image compression methods have been applied to a range of images with differing structural properties. The methods were subsequently adapted to use LBPs. Results comparing all these methods are discussed.

  18. Arps, R.B. Bibliography on binary image compression. Proceedings of the IEEE, July 1980, vol. 68, no. 7, pp. 922-924.

    Lists published papers that specifically relate to the compression of digitized binary images from graphics data. By graphics data the author means printed matter, handwriting, line drawings, or any image data that is basically two-level: information on a background. The data compression covered here assumes information reduction to images with two levels of signal amplitude, followed by redundancy reduction of these binary (i.e., black/white) images.

Last Updated 06/26/99.