EE381K Multidimensional Digital Signal Processing - Video Coding
Guest Lecture by Dr. Michael Horowitz,
Applied Video Compression
The above slides from Prof. Yao Wang are based on material from
Y. Wang, J. Ostermann, and Y.-Q. Zhang, Video Processing
and Communications, Prentice Hall, 2002.
Here is more supplemental information:
The following questions arose during the Spring 2003 presentation
by Serene Banerjee of the above slides by Prof. Wang. Here are
the questions, as well as Serene's answers.
- Lecture on Video Coding
(Slides by Mr. Jong-il Kim.)
- Supplemental slides
(Slides by Mr. Zhou Wang.)
- How is DPCM different from motion estimation/compensation coding?
For intraframe coding (i.e., coding macroblocks in the same frame),
the steps are:
- Differential pulse code modulation (DPCM) (to exploit spatial
redundancy within a frame)
- Transform coding (discrete cosine transform)
For interframe coding (for example, coding the P-frame after coding the
I-frame), the steps are:
- Motion estimation (where the best matching macroblock is found in
the previous, i.e. reference, frame) (to exploit redundancy between frames)
- Motion compensation (where one subtracts the best matching macroblock
in the previous frame from the current macroblock); i.e.,
e(x,y,t) = I(x,y,t) - I(x-u,y-v,t-1), where (u,v) is the motion vector
So, DPCM is for coding blocks in the same frame, and motion
estimation/compensation is for computing differences across frames.
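The intraframe DPCM idea can be sketched in a few lines. This is a hypothetical 1-D example (not from the slides): each sample is predicted from the previously reconstructed sample, and only the quantized prediction error is transmitted.

```python
def dpcm_encode(samples, step):
    """DPCM encode a 1-D signal: transmit quantized prediction errors."""
    codes, prev_recon = [], 0
    for s in samples:
        e = s - prev_recon        # prediction error (predictor = previous reconstruction)
        q = round(e / step)       # quantizer index, to be entropy coded
        codes.append(q)
        prev_recon += q * step    # track the decoder's reconstruction, not the input
    return codes

def dpcm_decode(codes, step):
    """Invert DPCM by accumulating dequantized prediction errors."""
    recon, prev = [], 0
    for q in codes:
        prev += q * step
        recon.append(prev)
    return recon

row = [100, 102, 105, 105, 101]
codes = dpcm_encode(row, step=1)   # step=1 on integer samples is lossless here
print(codes)                       # [100, 2, 3, 0, -4]
print(dpcm_decode(codes, step=1))  # [100, 102, 105, 105, 101]
```

Note that the encoder predicts from its own reconstruction (prev_recon), not from the original sample, so encoder and decoder stay in sync even when the quantizer is lossy.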
- Are the motion vectors DCT coded and quantized?
No. Motion vectors are not transform coded; they are typically
predicted differentially and entropy coded. (See the previous answer
for how they are obtained.)
- What is the difference between motion estimation and compensation?
Motion estimation is finding the best matching block and motion
compensation is subtracting the best matching block (see the
answer for question 1 for more details).
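The distinction can be made concrete with a small full-search sketch (hypothetical code, with frames as plain lists of lists): estimation searches for the displacement (u,v) that minimizes the sum of absolute differences (SAD), and compensation then forms the residual e(x,y,t) = I(x,y,t) - I(x-u,y-v,t-1).

```python
def sad(cur, ref, bx, by, u, v, B):
    """Sum of absolute differences between the current block and a shifted reference block."""
    return sum(abs(cur[by + y][bx + x] - ref[by + y - v][bx + x - u])
               for y in range(B) for x in range(B))

def motion_estimate(cur, ref, bx, by, B, R):
    """Motion estimation: full search over displacements in [-R, R]^2 for the best (u, v)."""
    H, W = len(ref), len(ref[0])
    best_sad, best_mv = float("inf"), (0, 0)
    for v in range(-R, R + 1):
        for u in range(-R, R + 1):
            # skip candidates whose reference block falls outside the frame
            if 0 <= by - v and by - v + B <= H and 0 <= bx - u and bx - u + B <= W:
                s = sad(cur, ref, bx, by, u, v, B)
                if s < best_sad:
                    best_sad, best_mv = s, (u, v)
    return best_mv

def motion_compensate(cur, ref, bx, by, u, v, B):
    """Motion compensation: residual e(x,y,t) = I(x,y,t) - I(x-u, y-v, t-1) for one block."""
    return [[cur[by + y][bx + x] - ref[by + y - v][bx + x - u]
             for x in range(B)] for y in range(B)]

# Reference frame with a 4x4 patch; current frame has the same patch moved by (1, 1).
H = W = 8
ref = [[0] * W for _ in range(H)]
cur = [[0] * W for _ in range(H)]
for y in range(2, 6):
    for x in range(2, 6):
        ref[y][x] = 10 + y + x
        cur[y + 1][x + 1] = 10 + y + x

u, v = motion_estimate(cur, ref, bx=3, by=3, B=4, R=2)
residual = motion_compensate(cur, ref, 3, 3, u, v, 4)
print((u, v))  # (1, 1): the residual for this block is all zeros
```

Estimation is the expensive search step (done only in the encoder); compensation is the cheap subtraction/addition step (done in both encoder and decoder).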
- Does temporal scalability introduce errors?
In temporal scalability, a base layer has the required I- and P-frames
needed for proper decoding. An enhancement layer can contain intermediate
P- or B-frames for temporal scalability. However, omitting these enhancement
layer frames should not introduce additional errors in decoding the base
layer. For a pictorial answer, refer to the following paper:
"Temporal Scalability Using P-pictures for Low-latency
Applications", Proc. IEEE Workshop on Multimedia Signal Processing,
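The layering idea can be shown with a toy sketch (hypothetical numbers, not from the paper): the base layer carries every other frame and is decodable on its own; enhancement frames only raise the frame rate when they are present.

```python
frames = list(range(8))       # display-order frame indices of a short sequence
base = frames[::2]            # I/P frames that reference only other base-layer frames
enhancement = frames[1::2]    # intermediate frames; safe to drop

def decodable(received_layers):
    """Frames a decoder can display, given the layers it actually received."""
    return sorted(f for layer in received_layers for f in layer)

print(decodable([base]))               # [0, 2, 4, 6]: half frame rate, no errors
print(decodable([base, enhancement]))  # [0, 1, 2, 3, 4, 5, 6, 7]: full frame rate
```

Because no base-layer frame references an enhancement frame, dropping the enhancement layer changes only the frame rate, not the correctness of the base-layer decode.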
- Does temporal scalability introduce delay?
Based on the previous answer, it should not introduce additional delay in
the base layer.
- Can loop filtering be done only on the decoder side?
To remove blocking and ringing artifacts one can use
- loop filtering during motion estimation/compensation, or
- post filtering only in the decoder.
Post filtering can be done only on the decoder end, but loop
filtering has to be done on both the encoder side and the decoder side.
Because loop filtering modifies the reference frames used for motion
estimation/compensation, applying it only on the decoder side would make
the encoder and decoder predictions diverge and introduce artifacts.
For more information, a good reference paper is:
Y. L. Lee and H. W. Park, "Loop-filtering and Post-filtering for Low
Bit-rates Moving Picture Coding", Proc. IEEE Int. Conf. on Image
Processing, vol. 1, pp. 94-98, Dec. 1999.
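As a rough illustration of what such a filter does (a made-up smoothing rule, not the filter from the paper): soften the intensity step across each block boundary in a row of pixels.

```python
def deblock_row(row, block=4):
    """Smooth the two pixels straddling each block boundary in one image row."""
    out = list(row)
    for b in range(block, len(row), block):
        p, q = row[b - 1], row[b]  # pixels on either side of the boundary
        d = (q - p) // 4           # move each a quarter of the step toward the other
        out[b - 1] = p + d
        out[b] = q - d
    return out

row = [10, 10, 10, 10, 30, 30, 30, 30]   # a hard blocking edge at index 4
print(deblock_row(row))                  # [10, 10, 10, 15, 25, 30, 30, 30]
```

Used as a post filter, this runs only on decoded output. Used as a loop filter, the same operation must be applied to the reconstructed reference frames inside both the encoder and the decoder, so that their predictions stay identical.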
- Is MPEG-7 standardized yet?
From the MPEG
Website, it appears that the final committee draft for MPEG-7
is not yet public, so they may be in the final step of the
standardization process.
Last updated 02/20/08.