Error Diffusion Halftoning Methods for High-Quality Printed and Displayed Images

Prof. Brian L. Evans
Department of Electrical and Computer Engineering
The University of Texas at Austin

Graduated Ph.D. Students: Dr. Niranjan Damera-Venkata (HP Labs) and Dr. Thomas D. Kite (Audio Precision)

Graduate Students: Mr. Vishal Monga

Other Collaborators: Prof. Alan C. Bovik (UT Austin) and Prof. Wilson S. Geisler (UT Austin)

Talk in PowerPoint and PDF formats

Halftoning Research at UT Austin - Halftoning Toolbox


Image halftoning converts a high-resolution image to a low-resolution image, e.g., a 24-bit color image to a three-bit color image or an 8-bit grayscale image to a binary image, for printing and display. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the error among neighboring grayscale pixels. Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. We describe approaches to compensating for linear and nonlinear distortion based on the observation that error diffusion is two-dimensional sigma-delta modulation (Anastassiou, 1989).
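The error diffusion feedback loop can be sketched as follows. This is a minimal illustration using the classic Floyd-Steinberg error filter (one common choice of filter; the function name is illustrative, and this is not the talk's specific method):

```python
import numpy as np

def error_diffuse(image):
    """Binarize a grayscale image in [0, 1] by error diffusion.

    The quantization error at each pixel is filtered (here with the
    Floyd-Steinberg weights) and fed back to not-yet-processed
    neighbors, diffusing the error among neighboring pixels.
    """
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # thresholding quantizer
            out[y, x] = new
            err = old - new                   # quantization error
            # Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the filtered error is fed back, the binary output preserves the local average gray level of the input, which is what makes the halftone look correct at viewing distance.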

Following the 1-D sigma-delta work of Ardalan and Paulos (1988), we replace the thresholding quantizer with a scalar gain plus additive noise. The amount of sharpening is proportional to the scalar gain. By appropriately setting the sharpness control parameter in the threshold modulation approach of Eschbach and Knox (1991), sharpening effects can in theory be eliminated. We use the resulting unsharpened halftones in perceptually weighted SNR measures. We also use the sharpness control parameter to achieve rate-distortion tradeoffs in JBIG2 compression of error-diffused halftones.
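Threshold modulation can be sketched by making the quantizer threshold depend on the input pixel. The formulation below (modulating the threshold by a scaled, zero-centered copy of the input, with sharpness parameter L) is one common way to write it and is an illustrative sketch, not the exact parameterization from the talk; L = 0 reduces to standard error diffusion, while other values of L adjust the sharpening introduced by the feedback loop:

```python
import numpy as np

def error_diffuse_tm(image, L=0.0):
    """Error diffusion with input-dependent threshold modulation.

    The threshold at each pixel is shifted by -L * (x - 0.5), where x
    is the original input pixel, so the sharpness control parameter L
    trades off edge sharpening against an unsharpened rendering.
    """
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            u = img[y, x]
            t = 0.5 - L * (image[y, x] - 0.5)  # modulated threshold
            new = 1.0 if u >= t else 0.0
            out[y, x] = new
            err = u - new
            # Floyd-Steinberg error filter weights
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

In the linear gain model, the sharpening contributed by the quantizer's effective scalar gain can be cancelled by a suitable choice of L, which is how the unsharpened halftones used in the perceptually weighted SNR measures are obtained.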

We generalize linear distortion compensation by means of an adaptive threshold modulation framework. Within this framework, we adaptively optimize the hysteresis coefficients in the green-noise error diffusion of Levien (1993). For edge enhancement halftoning, we minimize linear distortion by adapting the sharpness control parameter. We break up directional artifacts by using a deterministic bit-flipping quantizer, previously used by Magrath and Sandler (1997) in sigma-delta modulation research.

Finally, we generalize our work to vector error diffusion (Haneishi et al., 1993) for color images, in which the scalar gain becomes a matrix gain. We apply the adaptive framework to optimize visual quality by using a linear color vision model, and we evaluate four linear color vision models via subjective testing.
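A basic vector error diffusion loop quantizes each color pixel to the nearest palette color and diffuses the vector-valued error. The sketch below uses scalar Floyd-Steinberg weights applied per channel for simplicity; the talk's adaptive method instead uses matrix-valued coefficients optimized against a linear color vision model:

```python
import numpy as np

def vector_error_diffuse(image, palette):
    """Vector error diffusion sketch for an H x W x 3 image in [0, 1].

    Each pixel is quantized to the nearest color in `palette`
    (Euclidean distance in RGB), and the vector quantization error is
    diffused to unprocessed neighbors, here with scalar weights.
    """
    img = image.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros_like(img)
    pal = np.asarray(palette, dtype=float)
    for y in range(h):
        for x in range(w):
            px = img[y, x]
            idx = np.argmin(((pal - px) ** 2).sum(axis=1))  # nearest color
            out[y, x] = pal[idx]
            err = px - pal[idx]  # vector-valued quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Replacing the per-channel scalar weights with matrix-valued filter coefficients lets the error in one channel be diffused into the others, which is the degree of freedom the matrix gain formulation exploits.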

We recently released a freely distributable halftoning toolbox for MATLAB.


Brian L. Evans is an Associate Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. His research and teaching efforts are in embedded real-time signal and image processing systems. His group focuses on the design and real-time implementation of ADSL/VDSL transceivers, printer pipelines, digital still cameras, and 3-D sonar imaging systems. In printer pipelines, his group's primary contribution is in the design, analysis, and quality assessment of halftoning by error diffusion. Dr. Evans has published over 100 refereed conference and journal papers. His B.S.E.E.C.S. (1987) degree is from the Rose-Hulman Institute of Technology, and his M.S.E.E. (1988) and Ph.D.E.E. (1993) degrees are from the Georgia Institute of Technology. He is the recipient of a 1997 NSF CAREER Award.