Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG achieve only moderate compression ratios because they do not fully take into account the special image characteristics. In this paper, we present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the classical halftone image. This is a non-trivial problem for which we propose a fast, close-to-optimal context template selection scheme based on the sorted autocorrelation function of a part of the image. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, as well as with stochastic halftones. For classical halftones, the global improvement with respect to JBIG in its best mode is about 30% to 50%; binary tree modeling increases this by another 5% to 10%. For stochastic halftones, the autocorrelation-based template gives no improvement, though an exhaustive search technique shows that even bigger improvements are feasible with the context modeling technique; introducing binary tree modeling increases the compression ratio by about 10%.
Applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. We present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast, close-to-optimal context template selection scheme based on calculating and sorting the autocorrelation function of a part of the image. For evaluating our scheme, we have also investigated the compression performance of a suboptimal approach based on an incremental exhaustive search; because of timing requirements, this approach is practical for decompression but not for compression. We have experimented with classical halftones of different resolutions and sizes, screened under different angles.
Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG achieve only moderate compression ratios due to the periodic dot structure of classical halftones. In this paper, we present two improvements on the context modeling scheme. Firstly, we adapt the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast, close-to-optimal context selection scheme based on calculating and sorting the autocorrelation function of a part of the image. Secondly, increasing the order of the model produces an additional gain in compression. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, and produced by either a digital halftoning algorithm or a digital scanner. The additional coding time for context selection is negligible, while the decoding time remains the same. The global improvement with respect to JBIG (which has features such as one adaptive pixel, histogram rescaling and multiresolution decomposition) is about 30% to 65%, depending on the image.
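The template selection idea described in the three entries above can be sketched compactly: compute the autocorrelation of a sample region of the halftone, rank the causal pixel offsets by correlation strength, and keep the strongest ones as the context template. The sketch below is a minimal illustration under assumed choices (NumPy, an FFT-based circular autocorrelation, and the function name and num_pixels/max_offset parameters are all illustrative); it is not the authors' exact selection procedure.

    import numpy as np

    def select_context_template(sample, num_pixels=10, max_offset=16):
        # Rank causal pixel offsets by the autocorrelation of a sample
        # region and keep the strongest ones as the context template.
        x = sample.astype(float)
        x -= x.mean()
        f = np.fft.fft2(x)
        ac = np.fft.ifft2(f * np.conj(f)).real      # circular autocorrelation

        candidates = []
        for dy in range(0, max_offset + 1):
            for dx in range(-max_offset, max_offset + 1):
                if dy == 0 and dx <= 0:
                    continue                         # keep only causal offsets
                candidates.append((ac[dy % ac.shape[0], dx % ac.shape[1]], dy, dx))

        candidates.sort(key=lambda t: -t[0])         # strongest correlation first
        return [(dy, dx) for _, dy, dx in candidates[:num_pixels]]

During coding, each pixel's context number would then be formed from the previously coded pixels at the selected offsets (e.g. template = select_context_template(halftone[:256, :256])), in place of JBIG's fixed template.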
The recent expansion of facsimile application fields has increased the need to transmit at high speed not only black and white binary images of characters, but also half-tone images, including high-quality photographic images. The authors propose a periodic error diffusion method for realizing high-quality half-tone images with highly efficient coding. The method reproduces both gray scale and resolution well, and moire patterns are unlikely to occur for screened photographs. In addition, it achieves high coding efficiency because the thresholded half-tone data has a periodic structure. The authors also propose an adaptive coding preprocessing method for thresholded mixed images containing both characters and half-tone images.
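The abstract does not spell out the periodic variant, but the baseline it modifies is ordinary error diffusion. The sketch below shows standard Floyd-Steinberg error diffusion only, as background; the paper's contribution (making the thresholding periodic so that the binary output has a regular, more compressible dot structure) is not reproduced here, and the function name and 0/255 conventions are illustrative.

    import numpy as np

    def error_diffusion(gray):
        # Floyd-Steinberg: threshold each pixel and push the quantization
        # error onto the not-yet-processed neighbours.
        g = gray.astype(float).copy()
        h, w = g.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(h):
            for x in range(w):
                new = 255.0 if g[y, x] >= 128 else 0.0
                err = g[y, x] - new
                out[y, x] = 1 if new else 0
                if x + 1 < w:
                    g[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        g[y + 1, x - 1] += err * 3 / 16
                    g[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        g[y + 1, x + 1] += err * 1 / 16
        return out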
This paper presents an efficient lossless data compression scheme for various kinds of binary images such as line drawings and half-tone pictures. The authors studied a combination of preprocessing and Lempel-Ziv universal coding. To improve the compression ratio, the authors modified Lempel-Ziv universal coding by dividing its dictionary into classified subdictionaries. They obtained improved compression ratios in computer simulation on composite images consisting of mixed text and images.
This paper presents a new approach using universal coding for various kinds of binary images, such as line-drawings and half-tones. The authors studied two types of preprocessing of universal coding for binary images, and found that for various kinds of line-drawings and half-tone (screen-dots) images, both preprocessors outperformed conventional schemes.
The medial axis transform (MAT) is a sparse representation of shape which, being reversible, has potential for binary image compression. The MAT also provides structural information not accessible with alternative binary image codes. A MAT-based coding scheme which respects its structural properties is developed. The MAT-based coding method is applied to the CCITT facsimile test images. The resulting compression is comparable to that of known coding schemes for textual data and somewhat better for graphical data. Some simple modifications to improve skeleton coding performance are suggested. To substantiate the claim that the MAT is effective for higher-level processing, the representation is extended to perform line segment and curve extraction. The extraction takes place under the control of a global error tolerance parameter and thus provides a new method to trade off data compression against distortion for binary images.
This paper presents new properties of the discrete morphological skeleton representation of binary images, along with a novel coding scheme for lossless binary image compression that is based on these properties. Following a short review of the theoretical background, two sets of new properties of the discrete morphological skeleton representation of binary images are proved. The first one leads to the conclusion that only the radii of skeleton points belonging to a subset of the ultimate erosions are needed for perfect reconstruction. This corresponds to a lossless sampling of the quench function. The second set of new properties is related to deterministic prediction of skeletal information in a progressive transmission scheme. Based on the new properties, a novel coding scheme for binary images is presented. The proposed scheme is suitable for progressive transmission and fast implementation. Computer simulations, also presented, show that the proposed coding scheme substantially improves the results obtained by previous skeleton-based coders, and performs better than classical coders, including run-length/Huffman, quadtree, and chain coders. For facsimile images, its performance lies between those of the modified READ (MR) method (K=4) and the modified modified READ (MMR) method.
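The two skeleton-based entries above rely on the same reversible, sparse representation: skeleton points plus their radii (the quench function) suffice to rebuild the image exactly. As a minimal, hedged illustration, the sketch below implements the textbook Lantuejoul skeleton decomposition with scipy.ndimage and a 3x3 structuring element; it shows why the representation is lossless, but it is not the coding scheme of either paper.

    import numpy as np
    from scipy.ndimage import binary_erosion, binary_opening, binary_dilation

    B = np.ones((3, 3), dtype=bool)          # 3x3 structuring element

    def morphological_skeleton(img):
        # Lantuejoul decomposition: S_n = (X eroded n times by B) minus its
        # opening by B; the n-th subset holds the skeleton points of radius n.
        x = np.asarray(img, dtype=bool)
        subsets = []
        while x.any():
            subsets.append(x & ~binary_opening(x, B))
            x = binary_erosion(x, B)
        return subsets

    def reconstruct(subsets, shape):
        # Exact reconstruction: union over n of S_n dilated n times by B.
        out = np.zeros(shape, dtype=bool)
        for n, s in enumerate(subsets):
            out |= s if n == 0 else binary_dilation(s, B, iterations=n)
        return out

A coder built on this representation stores, per radius n, only the positions in S_n; the second entry above shows that an even smaller subset of these points suffices.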
In this paper, a level compression-based image representation (LCBIR) is presented. This new image representation method improves on the bintree representation for compressing digital binary images. We then present a fast search algorithm on the LCBIR, which supports fast search and query in pictorial databases. Experimental results show that our search algorithm on the LCBIR is faster than the one on the bintree representation.
In this paper, we first present a variation of the 2D run-encoding, called the runlength Morton code encoding scheme, for compressing binary images. We then present efficient algorithms for manipulating set operations and performing conversions between the proposed encoding scheme and some well-known spatial data structures. The time complexities of the set operations are linearly proportional to the number of runlength Morton codes, and the time complexities of the conversions are linearly proportional to the number of nodes in the quadtree/bintree corresponding to the runlength Morton codes.
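The building block of the entry above is the Morton (Z-order) index, which interleaves the bits of the row and column coordinates so that runs of black pixels can be stored along the Z-order traversal. The sketch below shows one plausible reading of a "runlength Morton code" as a (start code, run length) pair; it assumes a 2^n x 2^n image so that consecutive traversal positions have consecutive codes, and the function names are illustrative rather than the paper's definitions.

    def morton_code(y, x, bits=16):
        # Interleave the bits of the row and column indices (Z-order index).
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (2 * i)
            code |= ((y >> i) & 1) << (2 * i + 1)
        return code

    def runlength_morton_encode(img):
        # Runs of black pixels along the Z-order traversal, stored as
        # (start_code, length) pairs.
        h, w = len(img), len(img[0])
        order = sorted((morton_code(y, x), y, x) for y in range(h) for x in range(w))
        runs, start, length = [], None, 0
        for code, y, x in order:
            if img[y][x]:
                if length == 0:
                    start = code
                length += 1
            elif length:
                runs.append((start, length))
                length = 0
        if length:
            runs.append((start, length))
        return runs

Set operations such as union and intersection then amount to merging two sorted lists of runs, which is consistent with the linear-time claim in the abstract.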
When processing shape information derived from a noisy source such as a digital scanner, it is often useful to construct polygonal or curved outlines that match the input to within a specified tolerance and maximize some intuitive notions of smoothness, simplicity, and best fit. The outline description should also be concise enough to be competitive with binary image compression schemes. Otherwise, there will be a strong temptation to lose the advantages of the outline representation by converting back to a binary image format. This paper proposes a two-stage pipeline that provides separate control over the twin goals of smoothness and conciseness: the first stage produces a specification for a set of closed curves that minimize the number of inflections subject to a specified error bound; the second stage produces polygonal outlines that obey the specifications, have vertices on a given grid, and have nearly the minimum possible number of vertices. Both algorithms are reasonably fast in practice, and can be implemented largely with low-precision integer arithmetic.
This work proposes a method for using an identity-mapping backpropagation (IMBKP) neural network for binary image compression, aimed at reducing the dimension of the feature vector in a NN-based pattern recognition system. In the proposed method, the IMBKP network was trained with the objective of achieving good reconstruction quality and a reasonable amount of image compression. This criterion is very important when using binary images as feature vectors. Evaluation of the proposed network was performed using 800 images of handwritten signatures. The lowest and highest reconstruction errors were, respectively, 3.05x10^-3% and 0.01%. The proposed network can be used to reduce the dimension of the input vector to a NN-based pattern recognition system with almost no degradation and a good reduction in the number of input neurons.
This paper explores compression schemes and error control coding for transmitting binary images over HF radio channels. A joint source-channel coding technique to investigate image quantisation using absolutely addressed picture elements (APEL) is discussed. This novel approach restricts error propagation and creates a robust data stream resilient to the problems inherent in HF channels.
Efficient compression of binary textual images is very important for applications such as document archiving and retrieval, digital libraries and facsimile. The basic property of a textual image is the repetition of small character images and curves inside the document. Exploiting the redundancy of these repetitions is the key step in most coding algorithms. We use a similar compression method in the subband domain. Four different subband decomposition schemes are described and their performance in a textual image compression algorithm is examined. Experimentally, it is found that the described methods achieve high compression ratios and are suitable for fast database access and keyword search.
Composite methods for compressing binary images are studied. Hierarchical block coding is the main component in all of them. An attempt is made to increase the compression by augmenting the block coding with predictive coding and bit-row reordering. The purpose is to increase the number of white pixels and all-white blocks. An error image is constructed from the differences between the predicted and original values of the pixels. The error image is then coded by hierarchical block coding, in which Huffman coding is used to encode the different bit patterns at the lowest level of the hierarchy. In the method, the global-level dependencies are thus handled by block coding and the local pixel-to-pixel dependencies by Huffman coding.
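The hierarchical block coding component of the entry above can be illustrated in a few lines: an all-white block is signalled with a single bit, and any other block is split into quadrants until a smallest block size is reached, whose bit pattern is then emitted (and would be Huffman-coded in the paper). The prediction and bit-row reordering steps are not shown, and the '0'/'1' flag convention and min_size parameter are illustrative assumptions.

    def hierarchical_block_code(block, min_size=4):
        # '0' for an all-white block; otherwise '1' followed by the codes of
        # the four quadrants, down to min_size blocks whose raw pattern is
        # emitted (the paper Huffman-codes these lowest-level patterns).
        h, w = len(block), len(block[0])
        if not any(any(row) for row in block):           # all-white block
            return "0"
        if h <= min_size or w <= min_size:               # lowest level
            return "1" + "".join(str(b) for row in block for b in row)
        h2, w2 = h // 2, w // 2
        quads = [
            [row[:w2] for row in block[:h2]], [row[w2:] for row in block[:h2]],
            [row[:w2] for row in block[h2:]], [row[w2:] for row in block[h2:]],
        ]
        return "1" + "".join(hierarchical_block_code(q, min_size) for q in quads)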
A new binary image coding technique is presented. In the new technique, the black regions in the input image are first partitioned into a number of nonoverlapping rectangles. The partitioning can be achieved using one of two different algorithms. The first algorithm can partition the input image into the minimum number of nonoverlapping rectangles at the expense of complex implementation. The second algorithm gives a near optimal partitioning with very simple implementation. After the black regions of the input image are partitioned into rectangles, the coordinates of two opposite vertices of each rectangle are compressed using a simple procedure that allows the decoder to reconstruct the original image. Test images of different types, sizes, and complexities are used to demonstrate the improvement of the new technique over the other currently available binary image compression techniques.
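The partitioning step of the preceding entry admits a very simple greedy variant, sketched below: starting from each uncovered black pixel in raster order, a rectangle is grown first to the right and then downwards, and each rectangle is recorded by two opposite vertices. This is only an illustration of the idea; it is neither the optimal nor the near-optimal algorithm of the paper.

    def greedy_rectangle_partition(img):
        # Cover the black pixels with nonoverlapping rectangles, growing
        # each rectangle rightwards, then downwards, from the first
        # uncovered black pixel in raster order.
        h, w = len(img), len(img[0])
        covered = [[False] * w for _ in range(h)]
        rects = []
        for y in range(h):
            for x in range(w):
                if img[y][x] and not covered[y][x]:
                    x2 = x
                    while x2 + 1 < w and img[y][x2 + 1] and not covered[y][x2 + 1]:
                        x2 += 1
                    y2 = y
                    while y2 + 1 < h and all(
                            img[y2 + 1][c] and not covered[y2 + 1][c]
                            for c in range(x, x2 + 1)):
                        y2 += 1
                    for r in range(y, y2 + 1):
                        for c in range(x, x2 + 1):
                            covered[r][c] = True
                    rects.append(((y, x), (y2, x2)))      # two opposite vertices
        return rects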
The paper describes a new technique to compress binary images based on an image covering algorithm. The idea is that binary images can always be covered by rectangles, each uniquely described by a vertex and two adjacent edges (an L-shape). Some optimisations are necessary to handle degenerate configurations. The method has been tested on several images representing drawings and typed texts. The comparison with existing image file compression techniques shows that our approach performs well. Further optimisations are under development.
Finite context models improve the performance of chain based encoders to the point that they become attractive, alternative models for binary image compression. The resulting code is within 4% of JBIG at 200 dpi and is 9% more efficient at 400 dpi.
We propose a novel approach for the lossless compression of binary images using logic minimization. The image is divided into windows or blocks of size r*c pixels and each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level logic minimizer. To reduce the bits required to encode the minimized cubes (product terms), a code set which satisfies the prefix property is used. If this technique fails to produce compression for a window, its pixels are stored as such. The main motivation of the work has been to investigate the potential of logic minimization as a tool for image data compression. Our technique outperforms UNIX compress in terms of compression ratio on most of the test images. The compression scheme is relatively slow, while the decompression time is comparable to that of UNIX compress.
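One plausible way to set up the switching function in the entry above is to use the row and column address bits of a pixel as the function's inputs and the pixel value as its output, listing the ON-set minterms in the Berkeley PLA format that ESPRESSO reads; ESPRESSO then merges the minterms into fewer cubes. The sketch below only performs this block-to-PLA conversion; the exact input encoding and the prefix code for the cubes used in the paper are not reproduced, and the function name is illustrative.

    def block_to_pla(block):
        # Encode an r x c binary block as a single-output Boolean function:
        # inputs are the (row, column) address bits, output is the pixel
        # value; only ON-set minterms are listed.
        r, c = len(block), len(block[0])
        rbits = max(1, (r - 1).bit_length())
        cbits = max(1, (c - 1).bit_length())
        lines = [".i %d" % (rbits + cbits), ".o 1"]
        for y in range(r):
            for x in range(c):
                if block[y][x]:
                    lines.append(format(y, "0%db" % rbits) +
                                 format(x, "0%db" % cbits) + " 1")
        lines.append(".e")
        return "\n".join(lines)

Running espresso on the resulting description yields a usually much shorter list of cubes over the same inputs, which is what the coder would store.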
The use of arithmetic coding for binary image compression achieves a high compression ratio, but the running time remains rather slow. The composite modelling method presented here reduces the amount of data to be coded by arithmetic coding. The method codes the uniform areas with less computation and applies arithmetic coding only to the areas with more variation.
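The composite idea in this entry, and in the two-stage modelling scheme two entries below, can be sketched as a thin layer in front of any arithmetic coder: uniform (all-white) tiles are signalled with a cheap flag and skipped, and only the remaining tiles are passed to the arithmetic coder. In the sketch, encode_block_ac is a hypothetical placeholder for a context-based arithmetic coder, and the per-tile flag and block size are illustrative choices rather than the papers' actual modelling.

    def composite_encode(img, block, encode_block_ac):
        # One flag bit per tile: '0' = all white (no arithmetic coding),
        # '1' = hand the tile to the arithmetic coder (encode_block_ac is a
        # placeholder, not a real library call).
        h, w = len(img), len(img[0])
        flags, coded = [], []
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = [row[x:x + block] for row in img[y:y + block]]
                if any(any(r) for r in tile):
                    flags.append("1")
                    coded.append(encode_block_ac(tile))
                else:
                    flags.append("0")
        return "".join(flags), coded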
This paper presents some recent advances in the architecture for the data compression technique known as arithmetic coding. The new architecture employs loop unrolling and speculative execution of the inner loop of the algorithm to achieve a significant speed-up relative to the Q-coder architecture. This approach reduces the number of iterations required to compress a block of data by a factor that is on the order of the compression ratio. While the speed-up technique has been previously discovered independently by researchers at IBM, no systematic study of the architectural trade-offs has ever been published. For the CCITT facsimile documents, the new architecture achieves a speed-up of approximately seven compared to the IBM Q-coder when four lookahead units are employed in parallel. A structure for fast input/output processing based on run length pre-coding of the data stream to accompany the new architecture is also presented.
A two-stage modelling scheme to be used together with arithmetic coding is proposed. The main motivation of the work has been the relatively slow operation of arithmetic coding. The new modelling scheme reduces the use of arithmetic coding by applying to large white regions a global modelling which consumes less time. This composite method works well: with a set of test images it took only about 41% of the time required by a QM-coder, while the loss in compression ratio is only marginal.
A major problem with modern document processing systems which incorporate high resolution workstations for displaying images is that the resolution of the stored image is often far greater than the resolution at which it is to be displayed. In contemporary systems, an image must be fully decompressed and then have its resolution transformed in order to be displayed as required. The authors present a novel technique for performing compression of binary images which uses progressive encoding. The compression is noiseless, and the progressive encoding allows a low resolution rendition of an image to be reconstructed with minimal complexity.
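Progressive encoding of the kind described above is usually built on a resolution pyramid: a coarse version of the image is coded first, and each finer level is coded conditioned on the previous one, so a low-resolution rendition can be reconstructed without decoding the whole stream. The sketch below shows only the pyramid construction, with a majority rule for the 2x2 reduction; the actual reduction rule and conditional coder of the paper are not given in the abstract, so these choices are purely illustrative.

    def reduce_resolution(img):
        # A low-resolution pixel is black if at least two pixels of its
        # 2x2 parent block are black (an illustrative majority rule).
        h, w = len(img) // 2, len(img[0]) // 2
        return [[1 if (img[2*y][2*x] + img[2*y][2*x+1] +
                       img[2*y+1][2*x] + img[2*y+1][2*x+1]) >= 2 else 0
                 for x in range(w)] for y in range(h)]

    def pyramid(img, levels=3):
        # Coarsest level first: it is sent first, and each finer level is
        # then coded conditioned on the one before it.
        p = [img]
        for _ in range(levels - 1):
            p.append(reduce_resolution(p[-1]))
        return p[::-1]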
A general technique for extending existing encoding methods is proposed which takes advantage of localised two-dimensional structure within an image. This is achieved by using small 2D patterns of adjacent pixel values in the algorithms. These local block patterns (LBPs) reflect the texture or granularity of the image contents. Non-uniform regions with structural regularity or periodicity are well catered for and local rather than global coherence is exploited. Some established binary image compression methods have been applied to a range of images with differing structural properties. The methods were subsequently adapted to use LBPs. Results comparing all these methods are discussed.
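The LBP idea above amounts to letting an existing coder see small 2D patterns instead of single pixels. A minimal sketch of the pattern-extraction step is given below, with an assumed 2x2 tile size and raster-order bit packing; the paper applies LBPs inside several established methods, which is not shown here.

    from collections import Counter

    def local_block_patterns(img, bh=2, bw=2):
        # Map each bh x bw tile to a pattern index (its bits in raster
        # order) so that an existing coder can operate on pattern symbols
        # instead of single pixels.
        h, w = len(img), len(img[0])
        symbols = []
        for y in range(0, h - bh + 1, bh):
            for x in range(0, w - bw + 1, bw):
                pattern = 0
                for dy in range(bh):
                    for dx in range(bw):
                        pattern = (pattern << 1) | img[y + dy][x + dx]
                symbols.append(pattern)
        return symbols, Counter(symbols)

The frequency table of the pattern symbols reflects the texture or granularity the abstract mentions and can drive a Huffman or arithmetic coder.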
Lists published papers that specifically relate to the compression of digitized binary images from graphics data. By graphics data the author means printed matter, handwriting, line drawings, or any image data that is basically two-level: information on background. The data compression covered here assumes information reduction to images with two levels of signal amplitude followed by redundancy reduction of these binary (i.e. black/white) images.
Last Updated 06/26/99.