Hi,

>> One thing for sure: You cannot remove errors due to quantization.
>> The purpose of quantization is to drop information, and once the
>> information is lost, it is gone forever. What you can do is try to
>> minimize the errors with respect to a certain class of images, that
>> is, given a certain model of what "an image" is.
>
> I guess you are right. Traditionally quantization error is regarded
> as white noise... but I have two parts of error. One part is the
> custom-designed DCT, compared with the perfect DCT,

A DCT by itself doesn't cause "error" because it is a mathematically
invertible operation. Thus, you could mean two different things here:

i) They implemented a transformation that is close, but not identical,
to the DCT. That wouldn't be much of a loss if the operation itself
remains invertible, i.e. the "error" is also in the decompressor.

ii) They introduce ad-hoc round-off errors in a DCT implementation,
making it lossy.

> I guess it has an error; I guess I should focus on minimizing this
> error.

Is it feasible to fix this error? (Simplest possible approach.)

> Another part is that even the perfect DCT still has quantization
> error. I guess I won't be able to get rid of that, as you've pointed
> out.

You cannot avoid it, but you can make it smaller at the cost of making
the compression weaker (finer quantization, longer codestream, better
quality).

>> The question now is, what are your "free variables": Are you
>> able/willing to tune the quantizer? If not, then you can only work
>> on the reconstruction points of the dequantizer (which is not too
>> much).
>
> I am not very sure that I understand these points. Can you enlighten
> me a little more in detail?

Well, a (scalar) quantizer takes an input signal x (a "real number")
and generates from it an integer indexing a set of intervals that
cover R. The easiest splitting of R into intervals (thus, the easiest
quantizer) would be to write R as [0,1) U [1,2) U [2,3) ... and so on.
Here, quantization is simply performed by rounding the number down to
the nearest integer. These intervals are called "buckets". On the
decompressor side, the bucket index is replaced by a value from that
interval, called the "reconstruction point". Both are (usually) free
variables of the quantizer: You may choose them within some limits,
i.e. choose the intervals, and choose the reconstruction points.

Typical applications split the real axis into intervals of equal size,
with one possible exception, namely the bucket containing zero (often,
but not always, twice as large as the remaining buckets).
Reconstruction points are usually picked in the middle of the interval
(though this is not optimal). What can be proven is the following:
Given a signal with known statistics p(x) (p is the probability
density of the signal x), then

o) the boundaries of the quantization buckets must lie mid-way between
the reconstruction points, regardless of the signal, and

o) the reconstruction points must be at the "center of mass" of the
buckets with respect to the probability density. That is:

   rec_point = \int_{bucket_lo}^{bucket_hi} x p(x) dx
             / \int_{bucket_lo}^{bucket_hi} p(x) dx

(i.e. the centroid of the bucket under p, normalized by the
probability mass of the bucket, not by its length).

>> Second question: What are you going to optimize: Visual quality, or
>> PSNR? They are not identical; the "standard" tables are tuned for
>> optimal visual quality, found out in various experiments.
>
> I guess "BOTH"...

You cannot optimize both at once. You have to make compromises. PSNR
and visual quality are not contradictory, but PSNR says less about
visual quality than one might expect.

So long,
    Thomas
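P.S.: To make the two conditions above concrete, here is a small
Python sketch (an illustration only; the unit-step quantizer, the
one-sided exponential test density, and all function names are my own
choices for this example, not from any particular codec). It compares
midpoint reconstruction against centroid reconstruction for the
buckets [n, n+1):

```python
import math
import random

def quantize(x, step=1.0):
    # Bucket index for a uniform quantizer with buckets [n*step, (n+1)*step).
    return math.floor(x / step)

def midpoint(idx, step=1.0):
    # Naive reconstruction point: the middle of the bucket.
    return (idx + 0.5) * step

def centroid(idx, pdf, step=1.0, n=1000):
    # Centroid of the bucket under the density pdf, by numerical
    # integration: rec_point = int x p(x) dx / int p(x) dx over the bucket.
    lo = idx * step
    dx = step / n
    xs = [lo + (k + 0.5) * dx for k in range(n)]
    num = sum(x * pdf(x) for x in xs) * dx
    den = sum(pdf(x) for x in xs) * dx
    return num / den

# Example density (my assumption): one-sided exponential, p(x) = e^-x, x >= 0.
pdf = lambda x: math.exp(-x)

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100000)]

def mse(recon):
    # Mean squared error of quantize-then-reconstruct over the samples.
    return sum((x - recon(quantize(x))) ** 2 for x in samples) / len(samples)

mse_mid = mse(midpoint)
mse_cen = mse(lambda i: centroid(i, pdf))
print(mse_mid, mse_cen)
```

For a decaying density the centroid sits below the bucket midpoint
(for the first bucket, about 0.418 rather than 0.5), which is exactly
why centroid reconstruction gives the smaller mean squared error.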