# EE381K Multidimensional Digital Signal Processing - Video Coding

Guest Lecture by Dr. Michael Horowitz, Applied Video Compression

• TBA.

## Supplemental Information

The above slides from Prof. Yao Wang are based on material from Y. Wang, J. Ostermann, and Y.-Q. Zhang, Video Processing and Communications, Prentice Hall, 2002.

Here is more supplemental information:

• Lecture on Video Coding in HTML format. (Slides by Mr. Jong-il Kim.)
• Supplemental slides in HTML, PowerPoint, and PDF formats. (Slides by Mr. Zhou Wang.)
The following questions arose during the Spring 2003 presentation by Serene Banerjee of the above slides by Prof. Wang. Here are the questions, along with Serene's answers.
1. How is DPCM different from motion estimation/compensation coding?
For intraframe coding (i.e., coding macroblocks within the same frame), the steps are:
1. Differential pulse code modulation (DPCM) (to exploit spatial redundancy within a frame)
2. Transform coding (discrete cosine transform)
3. Quantization
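The transform coding and quantization steps above can be sketched in a few lines. This is a toy illustration, not any standard's exact quantizer: the 8x8 block size, the orthonormal DCT matrix construction, the function names, and the single uniform step size `q_step` are all assumptions made for the example.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal 1-D DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def encode_block(block, q_step=16):
    # Separable 2-D DCT of one block, then a uniform scalar quantizer.
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return np.round(coeffs / q_step)

def decode_block(q_coeffs, q_step=16):
    # Dequantize, then apply the inverse 2-D DCT (C is orthogonal).
    C = dct_matrix(q_coeffs.shape[0])
    return C.T @ (q_coeffs * q_step) @ C

block = np.arange(64, dtype=float).reshape(8, 8)
rec = decode_block(encode_block(block))
# Reconstruction error is controlled by the quantization step size.
```

Because the DCT matrix is orthogonal, the only loss in this round trip comes from the rounding in the quantizer; a smaller `q_step` trades bit rate for fidelity.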
For interframe coding (for example, coding the P-frame after coding the I-frame), the steps are:
1. Motion estimation (where the best matching macroblock is found in the reference frame, e.g., the previous frame) (to exploit temporal redundancy between frames)
2. Motion compensation (where one subtracts the best matching macroblock in the previous frame from the current macroblock); i.e., e(x,y,t) = I(x,y,t) - I(x-u,y-v,t-1), where u and v are the components of the motion vector
So, DPCM is for coding blocks in the same frame, and motion estimation/compensation is for computing differences across frames.
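The two interframe steps can be sketched as a brute-force block matcher. This is a minimal illustration, not a standard's algorithm: the SAD cost, the search range, and the function names are assumptions. The sign convention follows the equation above, so the prediction for motion vector (u, v) is taken from I(x-u, y-v, t-1).

```python
import numpy as np

def motion_estimate(cur_blk, ref, top, left, search=4):
    # Full-search block matching: find the (u, v) that minimizes the
    # sum of absolute differences (SAD) against the reference frame.
    B = cur_blk.shape[0]
    best_mv, best_sad = (0, 0), np.inf
    for u in range(-search, search + 1):
        for v in range(-search, search + 1):
            r, c = top - u, left - v  # prediction comes from I(x-u, y-v, t-1)
            if r < 0 or c < 0 or r + B > ref.shape[0] or c + B > ref.shape[1]:
                continue
            sad = np.abs(cur_blk - ref[r:r + B, c:c + B]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (u, v)
    return best_mv

def motion_compensate(cur_blk, ref, top, left, mv):
    # Residual e(x,y,t) = I(x,y,t) - I(x-u, y-v, t-1).
    u, v = mv
    B = cur_blk.shape[0]
    return cur_blk - ref[top - u:top - u + B, left - v:left - v + B]

ref = np.random.default_rng(0).integers(0, 255, (16, 16)).astype(float)
cur = np.roll(ref, (1, 2), axis=(0, 1))  # whole scene shifts down 1, right 2
mv = motion_estimate(cur[4:12, 4:12], ref, 4, 4)
residual = motion_compensate(cur[4:12, 4:12], ref, 4, 4, mv)
```

For the purely translated frame above, the estimator recovers the true shift and the residual is exactly zero, which is why interframe coding is so much cheaper than coding each frame independently.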

2. Are the motion vectors DCT coded and quantized?
No. The motion vectors themselves are losslessly (entropy) coded; it is the prediction residual e(x,y,t) that is DCT coded and quantized. (See the previous answer for details.)

3. What is the difference between motion estimation and compensation?
Motion estimation is finding the best matching block and motion compensation is subtracting the best matching block (see the answer for question 1 for more details).

4. Does temporal scalability introduce errors?
In temporal scalability, a base layer contains the I and P-frames required for proper decoding. An enhancement layer can contain intermediate P- or B-frames for temporal scalability. Omitting these enhancement layer frames does not introduce additional errors in decoding the base layer. For a pictorial answer, refer to the following paper:

Stephan Wagner, "Temporal Scalability Using P-pictures for Low-latency Applications", Proc. IEEE Workshop on Multimedia Signal Processing, 1998.

5. Does temporal scalability introduce delay?
Based on the previous answer, it should not introduce additional delay in the base layer.

6. Can loop filtering be done only on the decoder side?
To remove blocking and ringing artifacts one can have
1. loop filtering during motion estimation/compensation or
2. post filtering only in the decoder.
Post filtering can be done on the decoder end only. Loop filtering, however, must be done on both the encoder side and the decoder side: because loop filtering changes the reconstructed reference frames used for motion compensation, applying it only at the decoder would make the encoder's and decoder's references drift apart and introduce artifacts. For more information, a good reference paper is:

Y. L. Lee and H. W. Park, "Loop-filtering and Post-filtering for Low Bit-rates Moving Picture Coding", Proc. IEEE Int. Conf. on Image Processing, vol. 1, pp. 94-98, Dec. 1999.
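A toy post filter of the kind described in option 2 might look like the following. This is an illustrative sketch only, not the filter from the paper above or from any standard: the 3-tap [1 2 1]/4 weights, the 8-pixel block size, and the function name are assumptions.

```python
import numpy as np

def post_filter(img, block=8):
    # Toy decoder-side deblocking: apply a 3-tap [1 2 1]/4 smoother only to
    # the pixels on either side of each block boundary; interior pixels are
    # left untouched so detail inside blocks is preserved.
    out = img.astype(float).copy()
    rows, cols = out.shape
    for c in range(block, cols, block):          # vertical boundaries
        for col in (c - 1, c):
            left = out[:, max(col - 1, 0)]
            right = out[:, min(col + 1, cols - 1)]
            out[:, col] = 0.25 * left + 0.5 * out[:, col] + 0.25 * right
    for r in range(block, rows, block):          # horizontal boundaries
        for row in (r - 1, r):
            up = out[max(row - 1, 0), :]
            down = out[min(row + 1, rows - 1), :]
            out[row, :] = 0.25 * up + 0.5 * out[row, :] + 0.25 * down
    return out

# A synthetic blocking artifact: a sharp step exactly on a block boundary.
img = np.zeros((16, 16))
img[:, 8:] = 10.0
smoothed = post_filter(img)
```

Running this on the synthetic step shows the discontinuity across the boundary shrinking while pixels away from the boundary are unchanged; a real deblocking filter would additionally adapt its strength to the local activity and quantization step.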

7. Is MPEG-7 standardized yet?
From the MPEG website, it appears that the final committee draft for MPEG-7 is not yet public, so the standard is probably in the final step of the standardization process.

Last updated 02/20/08. Send comments to bevans@ece.utexas.edu