#184 DXT1 decoding bugs

Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2009-08-05
Created: 2009-08-05
Creator: Anonymous
Private: No

I have encountered some problems with DevIL while writing my texture tool. After analyzing il_dds.c I think there are at least 3 bugs in the DXT1 decoding scheme, 2 of which I can locate; the third is not yet apparent to me.

Bug 1:
This is the function that converts colors from R5G6B5 to RGB8 in DevIL:

void DxtcReadColor(ILushort Data, Color8888* Out)
{
    ILubyte r, g, b;

    b = Data & 0x1f;
    g = (Data & 0x7E0) >> 5;
    r = (Data & 0xF800) >> 11;

    Out->r = r << 3 | r >> 2;
    Out->g = g << 2 | g >> 3;
    Out->b = b << 3 | r >> 2;
}

It seems obvious from the code above that the red and blue channels of the two block colors should always have their 3 most significant bits equal to their 3 least significant bits. But some blocks get decompressed to pixels with other values. For example, I get these values (after conversion to 8 bit) for the blue channel in one block:

164,170,175,181

The middle values are OK: 170 = (2*164 + 181 + 1)/3, etc.

But 164 is wrong. The binary form of this number is 10100100, and it could never have been produced by the DxtcReadColor function: its 3 most significant bits (101) do not match its 3 least significant bits (100).
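To illustrate the check I am doing, here is a minimal standalone sketch (only the two endpoint colors of a block have to survive the round trip; the two middle values are interpolated from them):

#include <stdio.h>

/* The 5-bit -> 8-bit expansion performed by DxtcReadColor: the top three
   bits are replicated into the low three bits, so (x >> 3) inverts it. */
static unsigned char expand5to8(unsigned char v)
{
    return (unsigned char)(v << 3 | v >> 2);
}

int main(void)
{
    unsigned char endpoints[] = { 164, 181 };  /* the two endpoints above */
    int i;
    for (i = 0; i < 2; i++) {
        unsigned char x   = endpoints[i];
        unsigned char inv = (unsigned char)(x >> 3);  /* candidate 5-bit value */
        printf("%3d -> 5-bit %2d -> %3d (%s)\n",
               x, inv, expand5to8(inv),
               expand5to8(inv) == x ? "round-trips" : "impossible");
    }
    return 0;
}

181 survives the round trip (22 expands back to 181), but 164 reduces to 20, which expands to 165, so 164 cannot have come from this function.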

Where is the problem? I really need to be able to convert the decompressed image back to DXT1 without writing my own compressor or using external libraries, and for this I need to be able to invert the color calculations performed during DXT1 decoding. Is there some kind of postprocessing applied to DXT1 images after loading? Can I turn it off?

This is how I use DevIL:

ilCopyPixels(0, 0, 0, w, h, d, IL_BGRA, IL_UNSIGNED_BYTE, mips[0].dataBGRA8);
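For context, the surrounding loading sequence looks roughly like this (a sketch: the file name is a placeholder, error checking is omitted, and dstBGRA8 stands for my mips[0].dataBGRA8 buffer of w*h*d*4 bytes):

#include <IL/il.h>

void LoadAndDecode(const char *file, void *dstBGRA8)
{
    ILuint img;
    ilInit();
    ilGenImages(1, &img);
    ilBindImage(img);
    ilLoadImage(file);  /* e.g. a DXT1-compressed .dds file */

    ILint w = ilGetInteger(IL_IMAGE_WIDTH);
    ILint h = ilGetInteger(IL_IMAGE_HEIGHT);
    ILint d = ilGetInteger(IL_IMAGE_DEPTH);

    /* By this point the DXT1 data has already been decompressed. */
    ilCopyPixels(0, 0, 0, w, h, d, IL_BGRA, IL_UNSIGNED_BYTE, dstBGRA8);
    ilDeleteImages(1, &img);
}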

Bug 2:
Also, I would like to note that the function DxtcReadColor seems to have a bug in the green channel decoding. I think it should be:

Out->g = g << 2 | g >> 4;

not

Out->g = g << 2 | g >> 3;

because g is 6 bits wide, and (g << 2 | g >> 3) destroys the lowest bit, reducing accuracy. Is this intentional? Is it some kind of quasi-conversion used instead of the proper one, (g * 255 + 32) / 63?
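A quick standalone check of the difference (g = 32, binary 100000, is one value where the two variants disagree):

#include <stdio.h>

int main(void)
{
    int g = 32;  /* 6-bit green value, binary 100000 */
    printf("rounded conversion: %d\n", (g * 255 + 32) / 63);  /* 130 */
    printf("g << 2 | g >> 4:    %d\n", (g << 2) | (g >> 4));  /* 130 */
    printf("g << 2 | g >> 3:    %d\n", (g << 2) | (g >> 3));  /* 132 */
    return 0;
}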

Bug 3:
Inside DecompressDXT1:

else {
    // Three-color block: derive the other color.
    // 00 = color_0, 01 = color_1, 10 = color_2,
    // 11 = transparent.
    // These 2-bit codes correspond to the 2-bit fields
    // stored in the 64-bit block.
    colours[2].b = (colours[0].b + colours[1].b) / 2;
    colours[2].g = (colours[0].g + colours[1].g) / 2;
    colours[2].r = (colours[0].r + colours[1].r) / 2;
    //colours[2].a = 0xFF;

    colours[3].b = (colours[0].b + 2 * colours[1].b + 1) / 3;
    colours[3].g = (colours[0].g + 2 * colours[1].g + 1) / 3;
    colours[3].r = (colours[0].r + 2 * colours[1].r + 1) / 3;
    colours[3].a = 0x00;
}

The code above is wrong: S3TC clearly states that in a three-color block the transparent color (colours[3], code 11) should be black, not an interpolation of color_0 and color_1.
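For reference, this is how I believe the branch should look according to the spec (same variables as in the fragment above):

else {
    // Three-color block: 00 = color_0, 01 = color_1,
    // 10 = (color_0 + color_1) / 2, 11 = transparent black.
    colours[2].b = (colours[0].b + colours[1].b) / 2;
    colours[2].g = (colours[0].g + colours[1].g) / 2;
    colours[2].r = (colours[0].r + colours[1].r) / 2;
    colours[2].a = 0xFF;

    colours[3].b = 0x00;  // black ...
    colours[3].g = 0x00;
    colours[3].r = 0x00;
    colours[3].a = 0x00;  // ... and fully transparent
}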
