From: Charles 'B. K. <kr...@cs...> - 2000-08-01 22:50:36
Today I did some fairly extensive testing of the idct code in libdv. Before this, I thought the 248 idct was the source of the noisy blocks people were noticing. I now conclude it has to be something else.

My test harness consists of two main pieces. The first is a modified playdv that reads a dv movie and dumps the coefficients of every DCT block to a text file. Each block in the file is also identified as either 88 or 248. The second piece is a program which reads the coefficient file and feeds each block to two versions of the idct routine: one is the integer idct used in libdv, the other is a brute-force floating point routine based directly on the standard iDCT equations. The program dumps the difference between the two results for each block.

The integer 248 routine is extremely accurate, with only occasional pixel values off by 1. The integer 88 routine is less accurate, with values often off by 1, and sometimes off by as much as 3. This is not surprising, since the 88 routine is mmx and uses 16 bit math. While debugging, I also checked many blocks by hand in matlab.

After all that, looking real close at the video with the 248 blocks identified (hard coded to display white) shows that badly messed up blocks are highly correlated with the 248s, but there are plenty of messed up 88s too. So the problem has to lie somewhere else: parse, quantize, or weight. I'm pretty confident in the parser, as it has fairly extensive scaffolding which should reveal errors quickly (if parsing went wrong, vlc errors should be obvious). But I will recheck anyway.

Another clue: turning off the various hand coded assembler routines doesn't seem to make a difference.

What I really need now is a way to get some test input and the YUV out of a known "good" decoder.

-- Buck